FWIW here's how I resolve the article's questions currently:
1. Energy usage: Using electricity is not unethical. For many reasons we need to transition Earth to use clean energy. This is not a reason to not use new technology, nor is deep learning particularly wasteful like blockchains are.
2. Training data: Seems reasonable to see AI training as fair use. If you disagree, that's fine too, but the legal specifics of a dataset's origin aren't really fundamental to AI training, so we can likely just train only on properly sourced data in some way. Many AI use cases don't have this problem anyway, like self-driving.
3. "Replacing people": I don't think you have a right to continue doing the same work your whole life if someone invents a more efficient way of doing it. We shouldn't build an economy around a busywork jobs guarantee. Plus, it's plausible this concern is a moot point, as Noah Smith argues here https://www.noahpinion.blog/p/plentiful-high-paying-jobs-in-... But I'm happy to do UBI or whatever, though UBI may not be as effective as one would hope.
4. "Incorrect information and bias". Doesn't strike me as a fundamental problem (models will get much better at this), nor an ethical question. Just know the information may be incorrect. Most of it isn't. And LLMs and AI have many use cases beyond retrieving reference information, even if most people just see Google search AI blowing it constantly.
5. Concentrating power: Seems like close-to-state-of-the-art LLMs will be commodified, etc. If it really becomes important I don't see why you couldn't just rustle up some donations for a truly open model or whatever.
> so we can likely just train only on properly sourced data in some way
But are we? I _can_ source chocolate and coffee from sources that don't use slave labor, but it becomes very important whether you _are_. You can't just shove the moral implications under the rug because an ethical option _exists_. Obviously this is less extreme than my chocolate/coffee example, but actually considering it is kinda the point of the questions being asked.
3 & 4 & 5 you just completely disregard as issues?!
"3 is not a problem given we do <highly controversial X>" - but what if we don't?
"4 is not a problem because we'll just get better" - but what about now when it isn't and what if we don't?
"5 is not a problem - just make X available and be rich"
3. So almost all other software work. Software engineering is, fundamentally, about replacing people.
4. Is a plain bullshit concern, really nothing to address here. People also hallucinate all the time, in the same sense LLMs do, and for the same reason.
5. Same could be said about trains or airplanes - or most of every item one has or sees in a 21st century urban setting.
Sure, those are all "but X is worse!" arguments, but they make sense because there's no reason to single out AI for those reasons and stay ethically consistent without giving up on almost everything else in your life.
I didn't downvote you; I think your response is a good road to walk down here.
Replacing people:
There's probably a lot of nuance here, but I would rephrase: tech is about empowering people to do something else. There's this TED talk guy with good graphs from years ago and I was watching one of his subsequent, less famous talks where he extols the washing machine as being uniquely liberating for women practically everywhere. But of course this doesn't always happen. We've also come up with very good extractive tech, which has the opposite effect in developing countries with a single valuable resource--maybe someone would like to be a software engineer or whatever, but the only reasonable thing to do in terms of economics, opportunities, or networks is join the local cobalt mine.
So, I wouldn't say software engineering is about replacing people. It's letting them do something else. Whether or not that something else is better is a societal problem. Also, scale matters. Did email and calendaring software replace lots of assistants? For sure. Were assistants the entire world economy? No. Put these two things together, and you start to see where AI companies are running aground. They're not empowering people, rather they're empowering companies to replace them. It'd be one thing if Claude were like, "hey software engineers, just flip me on, pay me $200/mo, and I'll earn you $300k/yr". That's not the pitch. Second, it's not just software engineers, or artists, or whatever. It's everyone. Again if the idea were "wow robots will do all the crap in our lives and we'll just work on our novels or weird hobby programming languages" OK, but again that's not at all the pitch.
Hallucinating:
Sure people hallucinate (I'm gonna say "are fallible" or "make mistakes" when it comes to people), but we expect them to. That's why someone doesn't just say, "hey camgunz, build a website" or if they do say that I don't just crank out the 1st draft without even loading it up and say, "OK, here is your website". Software engineering is programming over time, so we have processes like code review, CI/CD, testing, specifications, design, blah blah blah, because humans make mistakes. When I build websites I do as much of that myself as possible, as well as other quality assurance stuff, because I make mistakes.
But the pitch with AI is, "just dump stuff in the text box and poof", and somewhere in tiny font or buried in fine print is, "sometimes we goof, you should check things". But that's completely antithetical to the product because the more work you ask the AI to do, the more you have to check. If I'm just like "hey write me a fast inverse square root function" I have like 12 lines to check. If I'm like, "hey could you build me a distributed key value store" imagine the code I have to review and the concepts I have to understand (C++?) in order to really check the work.
But that can be fine! In the hypothetical "two Rust engineers + Claude build a distributed key value store" that's a big productivity win. It seems totally fine to use AI to do things you could do yourself, only way faster. But again that's not the pitch, it's, "this will let you do things you could never yourself do, like build a distributed key value store without learning systems programming; fire all your engineers".
Concentration of power:
Yes but we have lots of regulation--national and international--about this, because without it things were very bad (the industrial revolution was no picnic).
---
> Sure, those are all "but X is worse!" arguments, but they make sense because there's no reason to single out AI for those reasons and stay ethically consistent without giving up on almost everything else in your life.
Again scale matters. There's a clear difference between word processing software replacing typists and AI software replacing everyone working in a cognitive profession.
When I think about it I like to swap in physical professions, like the automating of factory work or trades work. Automating factory work was pretty bad; sure prices went down, but we disempowered an entire class of people and that's had pretty negative consequences.
Automating trades work has been really good though, and IMO the difference is that the benefits from automating factory work went to factory owners rather than workers, whereas benefits from automating trades work went to tradespeople. I think 90% of my issues with AI would go away if we said something like: companies with > $2M ARR are barred from using it; rather, their employees can use it and cannot be discriminated against for doing so, can't be required to reveal that they're using it, etc.
---
Finally, a lot of armchair AI analysis (this isn't disparaging; I'm in this crew) is like at the level of economic widgets and simple graphs or whatever. The pragmatic vision for SWEs is basically we all use some AI assistant and we're 10x more productive, even though we have to check the work of AI basically constantly.
But if software engineering becomes "AI output verification", I won't choose to be a software engineer anymore, because that's not the fun part. I don't know how many people will want to be AI output verifiers. The level of social change this threatens is monumental; one starts imagining a world where people just kind of lounge in sunny parks pursuing their dreams, but in truth I think the future is closer to us just reading reams of AI-generated whatever checking it for errors. Sure maybe I'd like to be a software engineer or a playwright, but the only economically reasonable thing for me to do is just read AI-generated React code. Pretty grim.
The energy usage concern is also funny in that it doesn't try to compare the energy usage of a human doing the same task (if one were to not use LLM to do the task). If we assume it's true that asking ChatGPT a question costs 3 bottles of water, you should take into account how long that question takes to answer by a human doing the research. If it takes you a couple of hours, you need to include the food and drink intake it takes to power yourself for a couple of hours. If it's anything like beef or almonds it takes way more than 3 bottles of water.
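To make that concrete, here's a rough back-of-the-envelope sketch of the comparison (every number below is an illustrative assumption, not a measurement):

    # All figures are illustrative assumptions, not measurements.
    LLM_WATER_PER_QUERY_L = 1.5          # assumed: "3 bottles" at ~0.5 L each, per the claim above
    HUMAN_RESEARCH_HOURS = 2.0           # assumed: time for a human to research the same answer
    HUMAN_DIET_WATER_L_PER_DAY = 3000.0  # assumed: embedded water footprint of a typical daily diet
    WAKING_HOURS_PER_DAY = 16.0

    # Pro-rate the human's daily dietary water footprint over the hours spent on the task.
    human_water_per_task = HUMAN_DIET_WATER_L_PER_DAY * (HUMAN_RESEARCH_HOURS / WAKING_HOURS_PER_DAY)

    print(f"LLM query:       ~{LLM_WATER_PER_QUERY_L:.1f} L")
    print(f"Human, 2 h task: ~{human_water_per_task:.0f} L (dietary water footprint)")

Obviously the right numbers are contestable, which is part of the argument downthread, but this is the shape of the comparison.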
> How did we go from energy consumed to human time?
Directly. Energy is consumed by humans over time.
That time may be spent on doing tasks. Spending that time doing a task increases energy consumption over baseline (though humans have substantial energy storage, so, for tasks shorter than a couple of hours, the cost is usually paid after the task).
> The caloric needs of a human are minuscule compared to the electricity converted to waste heat to train and operate an LLM.
The costs of operating: not really, and that's the whole point of the comparison. Sure, humans are energy-efficient in some respects - but the brain can only do so much. There's plenty of work that LLMs do faster and better than humans, so much better that no human can match them. And speed means more tasks, so energy use per task is lower.
Also, if you're talking about cost of operating LLMs in general - i.e. keeping them running and doing arbitrary work, you really need to compare it against costs of keeping a human alive and available to do the work. So it's not just food, but also hygiene - including e.g. delivering water, heating water for laundry and showers, etc. - and similar consumables.
As for training costs, that must be a joke. Do you know how much it costs to train a human? Energy or otherwise? As a parent of three kids I'll tell you: a lot.
If you include the energy a human uses from birth to ~18, and then energy used while they're trained for whatever jobs you want to compare against AI, then the comparison gets even worse for humans.
The humans already exist. We’re neither creating nor destroying them for the purpose of doing an LLM equivalent task, nor am I bathing or clothing myself to think up an answer to something, and evaluating the problem as though we are is a sick joke. This is the most unserious argument in favor of AI I’ve seen to date.
The scenario is a task needs to be done. Whether it's for business, personal, whatever. It's a task. It needs to be done somehow. It can be done by a human, or by AI.
We are calculating the cost of doing this task. I don't know how else we can be more clear.
Honestly I'm very unconvinced by the article author's felt need to calculate the energy cost of the LLM as an argument against it. But if we're using it as an argument, it's only fair to compare how else the task could be completed (i.e. by a human) and what that would cost. What is there to not understand about this comparison?
> We’re neither creating nor destroying them for the purpose of doing an LLM equivalent task
Sure, so drop the training costs on both sides.
Imagine your company sends you on a business trip to a different country, for a month-long stay. Okay, you exist and have to eat and sleep anyway, but I can't imagine you accepting this as an argument from your employer - "we don't need to reimburse your hotel or food or other expenses, after all, we're neither creating nor destroying you for the purpose of this assignment, and you'll be eating and sleeping and bathing anyway".
No, we don’t have to create an LLM to do a task. That’s a choice with costs we can evaluate.
If you really don’t understand how corporate travel arrangements work, I’ll clue you in:
Your employer reimburses your hotel even though you'd be sleeping anyway because your place to sleep doesn't exist in the place they're sending you. They pay for your meals because eating out is more expensive than eating at home, your food and kitchen at home don't exist in the place they're sending you, and also traveling is a pain so you deserve to treat yourself a little. They don't generally reimburse your laundry costs unless you have to stay more days than one can reasonably carry clean clothes for in a standard suitcase. Your bathing needs come free with the hotel.
I hope this helps you navigate the strange and confusing world we live in.
Thank you for the explanation. It matches my experience of having been sent across the world on business trips that lasted a month per stay (yes, standard suitcase doesn't fit enough clean clothes for those).
I don't see your problem with the comparisons. We don't have to hire people and train them for the job either. GPUs already exist and could be running something other than LLMs (say, crypto miners).
I feel it's you who's having problems drawing boundaries to allow for an apples-to-apples comparison.
The people are there expending energy even if you don't hire them, but we can choose not to expend the energy to make an LLM. I don't know how to explain it any more simply than that.
But you can't delete people? That's actually the big problem that needs to be addressed imo. It's happening now (or will later) with LLMs, but in the past it was a problem with industrialization (mass unemployment of workers) and then globalization (unemployment of more local workers).
If it's not addressed correctly, LLMs won't be progress for humanity.
But if you want to compare it "technically", maybe it's better to look at computer usage now vs. with LLMs (how many Google requests, sim failed, screen-on time, etc.).
That is exactly what I am trying to point out: that considering the energy usage of the LLM in isolation is a level of psychopathy, because that LLM is doing productive work that a human otherwise would.
Using any resource without care is unethical. It doesn’t matter how abundant or free it is. We don't have infinite resources. And using AI unnecessarily or inefficiently or abusively is a waste of resource and therefore unethical.
Information is a resource. And bottling it up is unethical.
Eliminating people's roles means freeing up resources, so that is ethical. Firing people is unethical; designate them to other tasks instead.
Bad resources are the responsibility of the person who uses them. Taking shortcuts is unethical.
Concentration of resources is unethical.
My reflection on AI is from the perspective of resource management. Ethics should be concerned with waste.
I try to challenge people's concerns with 1 in an even stronger way:
We're seeing evidence that the future power needs of AI may drive our civilization to start a revolution in high tech green energy, such as nuclear. If the power needs of AI end up being 10x the power needs of all other uses for that power, and we satisfy that power requirement using nuclear, then it will be much much easier to phase out heavy polluting sources of energy.
Yup. Using energy is not unethical on its own. And even with the climate crisis, if anything history tells us about how humanity works (which is not something that's going to change any time soon), the best way out of the crisis involves using even more energy.
To take a tangent off of 3: Automation eliminated art and craft in a lot of areas. Suddenly it's about economies of scale and things get optimised into Good Enough mass products with little to no variety. Consider architecture (bland), clothing (low durability), food (fast food, frozen food, worse nutrition, bland taste) etc etc.
Sure, people happily buy all this stuff, it works. But I can't help but feel our stuff is a lot worse than it used to be. I bet I'm not alone.
So in that sense, automation has a downside compared to work done by humans that care about their craft and quality. Craft and quality is harder to scale, so we scale mediocre things. Whether or not humans use machines and tools for their work is not the issue. That tools and machines combined with economic pressures lead us down a path of producing worse stuff is.
You can look at it through a lens of "if it fulfills the function, who cares". But I for one do prefer having unique, well-crafted, quality stuff. But the market pressures work against that.
Now my outlook is not as bleak as I make it sound. There's still a lot of quality stuff being made with love.
But my point is: Just because we can seemingly meet requirements with the cheapest most scalable approach, doesn't mean it's the best we can do. It's just the most Good Enough we can do.
> Consider architecture (bland), clothing (low durability), food (fast food, frozen food, worse nutrition, bland taste) etc etc. Sure, people happily buy all this stuff, it works. But I can't help but feel our stuff is a lot worse than it used to be. I bet I'm not alone.
For every intricate handcrafted cathedral or mansion, there were millions of one-room huts with leaky roofs. A change of clothes used to be an investment along the lines of what an expensive phone or even a car is for us today, so most people only had one or two changes of clothes, and not in very good condition. Food used to take up most of a person's wages and a bad harvest could cause immediate starvation for large numbers of people. Also, most people ate a diet consisting almost entirely of grains.
So, yea, if you are imagining some kind of past where you happened to be an aristocrat, you could live in a beautifully handcrafted house, wear only hand-tailored clothes, and eat heirloom vegetables for every meal. But if you're imagining yourself as an aristocrat, then you can live pretty well with that status in the modern era too.
This one isn't from automation. There are a lot of compounding reasons, but one primary one for the lack of ornamentation on modern buildings is the use of caulk to speed up the building process (which itself is still generally done by hand), eliminating many of the layered weather-sealing structural elements from previous building styles.
> But I can't help but feel our stuff is a lot worse than it used to be.
You can still get stuff that's "as good as it used to be" (or rather, as good as the small percentage of items that survived over decades). You're just going to be paying the same kind of prices for that quality, e.g. $500 for a KitchenAid stand mixer or $7000 for a Miele oven.
Fair point about architecture: It's not fundamentally about automation, it's about efficiency (which automation typically supports). If you want to do more with the same amount of people (or the same with less people), quite often, something's gonna give.
Getting stuff as good as it used to be can be pretty tricky. I'd have to do a 30 minute drive to buy bread that's actually baked by a trained baker. Most of the bread bakeries carry is made industrially.
The thing is: If things get cheaper, the consumers usually don't benefit primarily from that.
For one, the company that reduces production costs will do what they can to keep the difference. If competition drives prices down and consumers actually end up needing less money, watch the salaries shrink (effectively), because to a degree they are tied to the cost of living.
I'm not sure I'm getting this right, but I think it's worth considering.
According to some quick research (grain of salt, used ChatGPT deep research) KitchenAid stand mixers cost over $3000 (inflation adjusted) when released in 1919, and perhaps $1500 (inflation adjusted) in the 1970s, so they too are cheaper than ever at $500 today!
They are also shittier, but that's not the direct consequence of automation - that's a consequence of the market optimizing away quality over time as much as it possibly can. Happens with or without automation.
It's also often impossible to get high-quality goods at any price (or at least cheaper than it would take to contract a factory to make a small batch of a custom product, and then pay for QA, which I guess is possible... if you're a billionaire), and it's impossible for most people to even verify they're getting quality for price - both of these are, again, consequences of how competition works over time in the market economy, and happen with or without automation.
Another thing to consider here is that lower quality goods will get thrown out more, which is going to create a lot of waste, especially when done at scale.
But conversely higher quality goods may take more energy and material to produce. It's actually not obvious how this maths out! You'd have to look at it empirically.
I could not disagree more. Our "stuff" is so much better than it was before. Would you choose to drive a 1982 Chrysler LeBaron versus a 2025 Toyota Prius? A 2025 Prius is objectively a better car than any car in existence in the 1980s. Clothes last longer and are made more efficiently than in the 1990s, which places less burden on the environment. Technology? Two devices replace a dozen 1970s devices (stereo, record player, phone, ...). Cookware is made more efficiently out of better materials. You don't need forever-chemical Teflon to have a non-stick pan these days.
Do you have any insight on which kind of clothes last longer today? I thought that the rush to "fast fashion" and the shrinking budget for clothing had actually led to the opposite effect [1]. I'd also say that kitchen appliances and home entertainment devices (stereo, record player...) were often kept for decades while today's mobile phones, tablets, streamers, projectors, multi-room entertainment systems etc. are thrown out after 5-10 years because they broke down, their software does not get any updates any more or they don't support the latest and greatest standards any more.
Can't help but go off on another tangent here, because I hate modern cars with a degree of passion. My car is 17 years old and still works fine with yearly service. It doesn't patronise me, track me or provide terrible UX because some corporate marketing person had a questionable vision.
But I attribute that to enshittification. Car technology surely improved. It's just that, to me, the downsides that were introduced outweigh the advantages of newer technology.
> But I can't help but feel our stuff is a lot worse than it used to be.
You are probably wrong though, and you have to consider the benefits of lower cost items in terms of increased availability to the world's poor as well as what saving that money does to allow someone to deploy that money to improve other areas of their life.
There is almost certainly a better version of everything available to you at a cheaper price than 50 years ago, and a much cheaper version at a much cheaper price that is almost or about as good that billions of new people now have access to.
1) Agree that energy use is ethical. In fact, in so much as higher wealth and income is linked at the societal level to better outcomes such as longer life, lower crime, happier lives, etc., and wealth/income are strongly correlated to energy use, especially once we do the accounting for outsourced manufacturing, I'm willing to go so far as to say that in most circumstances, asking a person to lower their energy use is UNETHICAL. We should, of course, attempt to minimize externalities and harm that come from energy use, like pollution, habitat destruction, etc.
Also, just as a thing, not all public blockchains use a ton of energy. Since ETH (the number 2 blockchain) transitioned to proof of stake from proof of work a few years back, it uses practically no energy. Certainly it uses much less than other peer to peer networks, because it processes less data.
2) Agree.
3) Not only is it a good thing to replace jobs or partial jobs with automation, it is the only way that humanity moves forward for the long run! Otherwise, most of us would be dead of childhood diseases and over 90% of the remainder would be working hard manual labor jobs on farms. Certainly we need to practice harm reduction for the job functions and jobs eliminated! But only through automation do we improve materially as a species! (I'm not making a spiritual argument about the value of working on a farm vs a modern life, but material outcomes are much better and improve the more jobs we eliminate!)
4) Many tools have downsides. No tools are in all ways better than the old way of doing things. You learn to live with the bad if it's worth the good.
5) Given the open source nature of many LLMs, the ease with which state of the art LLMs appear to be copied, and the proliferation of various models that have been tuned for various tasks, I would say that LLMS distribute power rather than concentrating it!
In other words, the blog is basically wrong on every count.
Energy use is clearly neither ethical nor unethical in the abstract.
> "But only through automation do we improve materially as a species!"
Technology is not the only cause of change, and not all changes caused by technology are improvements. I believe the changes caused by LLMs are strict setbacks for humanity as a whole, but we will see.
>Not only is it a good thing to replace jobs or partial jobs with automation, it is the only way that humanity moves forward for the long run!
The fact that this has been true so far doesn't mean it always will be. It seems perfectly possible to me that there is an actual peak, after which human happiness will decline with increased advancements.
I’m pro non-fossil fuel power generation but I have issues with offshore wind. Right now we don’t have good ways of disposing of the turbine blades and the noise does have a negative effect on sea life. I would rather see wind farms on land, so I don’t see this as a bad thing.
Ultimately though, I’m pro-nuclear and eager to see what comes of small modular reactors.
And there’s the added problem of having all this big machinery and electrical equipment atop the ocean, which is the most insidiously corrosive thing on the planet.
1. Consuming something that has an impact on the environment is an ethical issue. And I think it is unethical in this case. They are coming up with a power hungry system that consumes large amounts of energy—and they are doing it for profits. I think we can agree that they don't really have any desire to give back to the environment for what they consume.
You might ask, was industrial revolution's impact on environment unethical, despite where it brought us? Of course it was unethical! If we had the opportunity to go back in time and make it better, we would! Except of course profit hungry people would still find ways to circumvent our new policies, and gain an economic edge over competitors by consuming more energy and thus, again, harming the environment. The same is happening with LLMs. People who seek profit are even building their own grids which generate electricity by burning fuel, just to run their data centers. They don't care about the impact it has on the environment in which we all live, and that is unethical.
And what if the need for power brings about new tech that will allow green energy to be generated much more efficiently? Like fusion reactors that work. Sure, nobody is stopping you from developing those! But you cannot take a bet on people's, animals', and plants' well-being and "have confidence" that it will all work out in the end! You cannot take a bet on people's lives for a promised future. Who has such a right? What an entitlement...
There is only one way to make this ethical: Will you take accountability on your shoulders if it all fails? Then be my guest. But no, if it all goes down, and instead we end up with a dystopian AI that can mass-control everybody, and washed-up shores and a world that became a desert, then every single person who has been responsible for the mess will go free, while some innocent scapegoats will take the toll. So no, personally I am not giving consent for profit-seeking people to take a bet on my planet's future.
2. If you can see the data with your eyes, then you should be able to train an AI with it. However when you commercialize your AI and chant non-stop about how all the writers and artists who contributed to your AI model were nothing other than a stepping stone for you to create your model, it turns into an attitude that is unethical.
But the owners of the model spend billions of dollars to train the models, so shouldn't they have the right to sell it? Well, if they had any _morals_, they could patiently train smaller models that could be built on nothing other than a research fund. Then we could have developed bit quantization to make it cheaper. Then we could have come up with better models by distilling them to make the LLMs better, and so on. But no, there are profits to be made and no time to lose! These needless ethics shall not hinder the progress (of making as much money as possible as soon as possible)...
3. Progress in tech will inevitably bring about disrupting changes to society. I also see no problem with this. However, I must point out that pro-AI people have a disrespect for people's hard-earned skills. I sense a grudge in these people: they really want AIs to nullify people's skills, skills that they themselves are too lazy to develop. In AI marketing, I also sense a similar strategy, i.e. their selling point is "now everybody can become an artist"... No, they can't. In my opinion this point has less to do with ethics. Over time we will all see whether these models live up to the hype. In the end the natural cycle of life will sort itself out.
4. I don't see your point. I think it is both a fundamental problem and an ethical issue. Let me explain.
Imagine that there is an ultimate Truth to something. We don't know what it is, but we have an idea about it. Now imagine that you are a bad actor who wished ill on people. You develop smart sounding propaganda to steer public opinion away from the Truth.
An LLM with bias is exactly that. So it is an ethical problem.
It is also a fundamental problem, that is, whether AI gives a correct or incorrect answer. If it is giving incorrect answers, then it is unremarkable. Currently, as far as my observations go, LLMs generate both (1) grammatically correct sentences and (2) sentences that seem contextually accurate. Whether the content of what they say is correct really seems to depend on how much of the requested Truth already exists on the Internet. So great, we invented a system that can recycle what is said on the Internet.
Don't get me wrong: I do think that it is a great advance in information processing, linguistics, and machine learning. But for some reason, we don't seem to be ready to acknowledge its shortcomings as a serious problem. Instead, there seems to be a general opinion that "it will get better". No, the thing is that if it hits a point in development where it generates enough value for the commercial companies, we will end up with a smart-sounding propaganda machine that can control uninformed masses at a scale that was unimaginable even for the most pessimistic person.
5. -> Seems like close-to-state-of-the-art LLMs will be commodified, etc.
The reason why I don't like your attitude is because of this. Yes, indeed, models will be developed by the open source crowd and will be in the hands of many. Surely progress in tech will bring about this.
And yet. This has nothing to do with what the author is saying. She is saying that _these LLMs_ made by _these companies_ (you know which ones we are talking about) will be used to concentrate power. Commodification of other models does not negate her point.
-> If it really becomes important I don't see why you couldn't just rustle up some donations for a truly open model or whatever.
I will shut up about this point when _it happens_. Until then, people who agree with me must keep bringing these issues to people's attention. AI has been used for controlling masses long before LLMs. Now it has been supercharged by LLMs because they can interpret human language pretty well. Therefore I think the blog post's author has a good point on this issue.
Overall I see hand waving and weak dismissal of her points.
1. I don't think using power for AI is more unethical than any other non-life-critical usage of the same amount of power. Since data centers use electricity, there are already many green technologies which can power them. People are just porting this argument from blockchain where it was much more valid. The hard parts of making the planet carbon-neutral lie elsewhere.
2. ??? This just seems like you're complaining about your impression about annoying AI posters on Twitter or whatever. Also nobody is stopping anybody from developing a cheaper AI or whatever, and that would be a very valuable thing to do.
3. I don't think it's disrespectful to say you've made it easier to make art or whatever, and I also don't think it's unethical to "disrespect" "hard earned skills".
4. Yes, it would be unethical to develop an AI to lie to people. It's not unethical to develop an AI that might be wrong sometimes, especially if you tell people about that very clearly.
5. If the complaint is just that you don't like particular companies, OK, fine. I don't really find that to be a very interesting discussion. I was talking about whether using LLMs was ethical, which is the title of the post.
- facial recognition software used in cameras in public spaces
- mass monitoring phone calls, text and processing them
- predictive policing (has been criticized heavily; currently this is a popular topic in the ethics of policing to train young philosophers on. Last I heard, police departments in the US have claimed to have opted out of this. Maybe they have better ways now?)
All these examples are from the west. If you never heard about any of this, you would be terrified at what they do in China, for instance.
Usually you would need to process data you collected and rely on some primitive indications to identify dissidents to the system. LLMs are a game changer on this aspect. When you know, you know :) ...
Sure, a local model may use a maximum of 65 watts during inference.
However, an H100 runs on 350 watts. Even accounting for the fact that a larger model will need more than one slice, it’s so much faster that this almost balances out.
That’s not the largest difference, though. The H100 can run multiple user queries in parallel, and because LLM inference is limited by memory bandwidth, using large batch sizes this way will dramatically improve efficiency. In short, it takes less power to run it in the datacenter.
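A rough sketch of why batching changes the per-query math (the power and throughput numbers below are hypothetical, chosen only to illustrate the argument, not measured figures for any real deployment):

    # Energy per query = (power draw) / (aggregate token throughput) * (tokens in the query).
    # Hypothetical numbers, just to show why batched datacenter inference can win per query.
    def joules_per_query(power_watts, aggregate_tokens_per_second, tokens_per_query=500):
        return power_watts / aggregate_tokens_per_second * tokens_per_query

    local_gpu = joules_per_query(power_watts=65, aggregate_tokens_per_second=20)     # one query at a time
    h100      = joules_per_query(power_watts=350, aggregate_tokens_per_second=1500)  # many queries batched

    print(f"local GPU : ~{local_gpu:.0f} J per 500-token query")
    print(f"H100 batch: ~{h100:.0f} J per 500-token query")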
To put it in perspective I've previously calculated that eating a single beef burger is worse for the environment in terms of CO2 produced than using a high token I/O "reasoning" AI model all day long.
The "AI uses a lot of electricity and creates a lot of carbon" crowd are typically way off when it comes to understanding the environmental impact of AI compared to the environmental impact of other typical daily activities.
I don't have much stake in stopping AI usage vs other ways to reduce emissions, but I watched the video. Sasha did a different paper than the one that article refers to. She doesn't really dispute her numbers, she just says it depends on a lot of factors. She does explicitly say it's a non-negligible amount of water, and in other interviews she asks for transparency from closed source AI providers.
As for the article, it makes a lot of assumptions. At one point they say speed is a marker of energy usage? I think they meant two models with the same speeds but different costs are a good measure for energy usage, but they hadn't mentioned cost up until this point. So this is an odd paragraph.
> GPT-3.5 is widely estimated to be in the ~20B parameter range. GPT-4o is similar in speed to GPT-3.5, which I think is a fair proxy for total power usage (i.e. time spent in GPUs). That’s another 10x improvement right out of the gate.
Regardless, I would disagree.
- Different hardware will use different amounts of energy for the same speed. We don't know anything about the hardware our sessions are run on except what they have told us.
- Just because the cost is cheaper doesn't mean it uses less energy. There's reason to think they are offering a deal to pull users in and burning cash on energy, though I'd agree it's not the full 10x.[1]
LLM training has nothing in common with search engine web crawlers, from an ethical standpoint.
Web crawlers index your content and direct people to your website, giving the writer some credit (arguably a lot of credit).
LLMs gorge themselves on everything you've ever written in order to improve their ability to regurgitate/mimic intelligence, while rarely ever giving credit to anyone at all.
I think the ability to block ai from including your work in its training data is important. Pointless? Maybe. But it should be an option for those who feel strongly enough about it.
And this doesn't even touch on copyleft. Getting credit is nice and all, but I publish my stuff under the AGPL because I want my users to have the right to modify the software, even if somebody else builds on top of my work.
LLMs are being used to launder my code right now and take away my users' rights.
There were cases where LLMs regurgitated functions verbatim, which caused a bit of uproar on Twitter and that's it. There should be massive lawsuits that take illegally trained LLMs deep into unprofitability.
But I guess the west is competing with China now, and morality or even legality goes out the window in a war.
It seems like you're claiming that the perceived benefits of eating meat (by those who do) exactly equal the benefits of using ChatGPT. Is that right? If so, I disagree.
I think he's saying that eating beef in particular is a tremendous amount of CO2 emissions (comparable to driving). Eating other meats has a much lower impact. Not eating meat is even much lower.
So one question: No one probably benefits much from beef (vs, say, chicken). So if I were to recast the parent: If you're concerned about energy usage from LLMs, you'll have a larger impact if you stop eating beef (with little impact to your health).
I think the claim is more about people's consistency in their choices. "If you care so much, start to [insert any environmentally friendly activity] and then we can discuss LLM environmental impact".
> Another technology that uses a lot of energy is blockchains. I think using public blockchains is almost universally unethical since there are other, better, less harmful options. Part of the harm from blockchains is an absurd amount of energy usage.
I'm not filled with confidence in the author after reading this. The absolutely overwhelming majority of blockchain transactions today occur on non-PoW chains which do not consume large amounts of electricity.
> When we put people out of work, we—both society and technologists—have an ethical responsibility to ensure there's a plan to mitigate the harm from that.
This is my major disagreement with the argument. I think she's giving too much moral weight on what the technologists creating the technology, or even using the technology, have to do here.
I don't think it was incumbent on the scientists first discovering electricity, to figure out how to mitigate the vast societal upheaval of all the jobs that would be lost because of it. I don't think I've ever stopped eating things grown in a field because the technology for farming has made 99% of farmers lose their job, with one person working the field with specialized machines doing the job of dozens. Nor have I expected the people creating new farming technologies to worry about the ethical implications of making farmers' jobs easier.
I'm pretty sure, in fact, that most of the world just says "thank you" that more food can be grown and harvested, and far more cheaply at that.
I do think society has to worry about this issue, both morally and practically, but not specifically the people creating the technology. In fact, I think it's a bit of hubris we in the technology community have, that we think because we created a technology, we are best suited to understand the implications of it or how to deal with it on a socio-economic-political level, when there are entire fields of study about those topics which most technologists are not acquainted with.
In case this helps with ethical concerns, one way to use LLMs more ethically is to use an open source LLM. For example, AI2 has a model called OLMo (https://allenai.org/blog/olmo2) that is open everything. Their stance is that truly open source models are released with weights, training data, code, and evaluation in full, and thus can be fully inspected and reproduced.
They recently also released an iOS app with a variant of this model, called OLMoE (https://allenai.org/blog/olmoe-app), that uses a small model that can do on-device inference, for private and secure use. Also the app itself is open source.
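For anyone who wants to try this route, here's a minimal sketch of running an open-weights OLMo model locally with Hugging Face transformers. The exact model id and whether your transformers version supports it are assumptions on my part; check AI2's pages for current releases.

    # Minimal sketch: local inference with an open-weights model via transformers.
    # The model id below is an example/assumption; see https://allenai.org for current releases.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "allenai/OLMo-2-1124-7B-Instruct"  # example id, verify before use
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer("Summarize the ethical arguments for using open LLMs.", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))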
I refuse to use it because it offloads human cognition into an external prosthetic. Computers and writing are already playing with fire. I’m being unironic because I’m already preempting the “writing is obviously a pure good” comments
Back when film was invented, people believed that photography stole a part of their soul every time they were captured and replicated. Since then, artists stopped attempting to paint realistic reproductions on canvas, and all kinds of -isms arose as a reaction (impressionism, expressionism...).
We live in similar times. AI is a new tool that opens new possibilities and forces us to rethink certain things, such as education, the authenticity of consumed media, the meaning of art, etc.
But you can't wear shoes, or buy clothing either. Everything I wear was literally made by people being exploited.
Everything I eat was picked or produced by people being exploited.
To function in any way under capitalism is to do so unethically. Because capitalism demands it.
LLMs are effectively a giant hodgepodge of stolen data, commodified. There's no way around it.
However, if LLMs help doctors cure diseases, people learn math, and a variety of other good things, it becomes unethical to not use them.
It's a paradox. And a contradiction, all in one. I don't think software engineers who outright refuse to use LLMs will have jobs much longer. Even if it's just printing out some boilerplate code, it's just where the industry's heading.
Though from the objectivist standpoint, maybe the real question the blogger is asking is not whether this will lead to exploitation of others, but whether it will lead to exploitation of oneself. Will it lead to mass exploitation of developed-world non-billionaires? And if so, are you being ethical to yourself if you use it?
The question is probably whether it is a net good. Which no one can say right now. Will LLMs cure cancer or be used in defense industries? The answer is both. But when you think about it, modern medicine originated as an invention of war: the original idea was to come up with better ways to treat injured soldiers. If you find a cure for a disease while trying to wage war, and then that cure saves millions of people, you can't say the war was good. But if it's going to happen, there should be some benefits from it.
I have a piece of paper and a pen. I can write a nice letter to a friend, send it and brighten their day. Or I can use it to write a really mean letter. Do we say pens are unethical ?
Yeah, there's a bit of a prisoner's dilemma there too. If everyone agreed not to use the thing then everyone would be better off (assuming this is what the author is suggesting), but individually you're better off using it in either scenario. So do you contribute to the thing that leads to your own downfall?
While each is concerning independently, I see the feedback cycle of 2,3,5 as being the biggest concern. 2: As systems get more powerful, they'll take training data off everything available. On the computer? They'll be capturing screenshots. Walking down the street? They'll be capturing video, GPS, etc. 3: It won't take them long to figure out what you do, and do it for cheaper. Everything you think you're good at will be useless. 5: We might try to pass regulations to prevent that from happening, but those who control these AI systems will have too much power, and their only incentive will be to use it to create a surveillance state, feeding back into 2,3.
Less concerned about 1 (long-term, I think a clean power source will be found) and 4 (they'll get better, unless the feedback cycle above creates a motive for intentional disinformation. And hopefully we'll get better at knowing when to trust them in either case).
I think people should stop using banks and electronic payments, they use too much energy, and also gaming on GPUs, too much energy.. and also stop using USD, it funds too many wars.
I have a suspicion there is massive (organized?) downvoting going on in the past few weeks or months but hard to prove since hn doesn't make enough data visible.
Thanks for your comment and sharing that link! Very useful website.
I also have a similar suspicion. Feels like if you have billions of dollars of investment in AI, then crushing things on the internet that criticize you can easily be solved for a few million. If this is the case, for me, it is the biggest proof that AI won't live up to the hype.
I wanted to say it doesn't have to be planned from the top because people naturally want to flock to a cause. But I also remembered a certain billionaire paying people to play computer games under his account to boost his stats. Paying people or bots to downvote criticism is certainly not below him or any of the others, it's just marketing expenditure.
Not sure what the stigma around blockchain energy usage is?
Other than Bitcoin, which remains a large proof-of-work network, most of the large public blockchains now use a proof-of-stake mechanism or other forms of consensus that can and do operate on modest hardware.
Yes, running them does require some hundreds of machines, but not more than your typical silicon valley darling like Uber or Airbnb would use to run their infrastructure.
This is a dilemma I'm deferring making a decision about on my most recent product (named Preppy [1]). I created an app which simplifies meal planning by letting you add the ingredients from any online recipe to your grocery store's shopping cart for pickup or delivery.
It uses LLMs for many things including parsing ingredients from the site.
The ethical dilemma is that the recipe website owners have spent time creating and publishing these recipes. They make their money by people visiting their site and clicking on ads. My app reduces a user's dependency on their recipe site once they've added a recipe to Preppy.
I'm genuinely interested in ways my app can benefit users and recipe website owners. I don't know what that solution is yet but would love to figure this out. Generally speaking, I don't think these recipe website owners are going to come out ahead with the availability of LLMs.
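For context, the ingredient-parsing step might look something like this. This is a hypothetical sketch rather than Preppy's actual implementation; the model name and the output schema are assumptions.

    # Hypothetical sketch of parsing recipe ingredients into structured data with an LLM.
    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def parse_ingredients(recipe_text):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model choice
            messages=[
                {"role": "system",
                 "content": "Extract the ingredients as a JSON array of objects with "
                            "'name', 'quantity', and 'unit'. Return only the JSON."},
                {"role": "user", "content": recipe_text},
            ],
        )
        return json.loads(response.choices[0].message.content)

    print(parse_ingredients("2 cups flour\n1 tsp salt\n3 large eggs"))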
Sure, that's one way of looking at it. My product may fall flat on its face (most likely outcome), so I wasn't interested in solving the problem before I had it.
But my mindset isn't YOLO. I genuinely would like to find a mutually beneficial way for it to work. Lots of ideas like revenue share but I'm hoping other ideas emerge.
The most straightforward way would probably be using your model to learn what recipes to serve to what users, but then requiring users to leave your app (or just open a window in-app) that loads the website so they can use the recipe. That wouldn't even necessarily be a bad thing; it could open the door for you include recipes delivered in non-text format, too, assuming you can figure out how to digest them.
+1 - There's already integrated search in the app where users can search for recipes from the web and preview the recipe site where ads are displayed. They don't have to leave the app to do it if they don't have a URL to paste in.
How about sharing revenue with them? Sadly, if you don't do it, someone else will (not condoning it). You might as well position yourself as a channel for them, and they'd be incentivised to create more recipes for you?
Agreed. The brave/BAT model fits here: create an "account" for each recipe author (could just be an entry in the database), and set aside money for each of them. Then reach out to each one and say: "hey do you want this money?"
I imagine it would be cumbersome to manage many relationships with recipe authors and that the actual payments made would be small enough that it might not be worth the effort to set up.
Imagine being a recipe author and having 15 or 20 AI app authors trying to each give you $0.25 a month. At some point it makes sense to handle flows like that without requiring the humans to explicitly set them up, at which point you might as well use BAT, but since so many are allergic to crypto--with good reason--maybe that's a non starter also.
It's less of a robust meal planner. It's meant to get the ingredients from an online recipe into your Kroger/Safeway/etc cart for pickup. We use it to find 4-5 meals we want to cook for the week and it takes about 15 minutes instead of an hour.