One key line about ATMs is buried deep in the article:
> the number of tellers per branch fell by more than a third between 1988 and 2004, but the number of urban bank branches (also encouraged by a wave of bank deregulation allowing more branches) rose by more than 40 percent
So, ATMs did impact bank teller jobs by a significant amount. A third of them were made redundant. It's just that the decrease at individual bank branches was offset by the increase in the total number of branches, because of deregulation and a booming economy and whatever else.
A lot of AI predictions are based on the same premise. That AI will impact the economy in certain sectors, but the productivity gains will create new jobs and grow the size of the pie and we will all benefit.
My prediction is no, because productivity gains must benefit the lower classes to see a multiplier in the economy.
For example, ATMs being automated did cause a negative drop in teller jobs, but fast money any time does increase the velocity of money in the economy. It decreases savings rate and encourages spending among the class of people whose money imparts the highest multiplier.
AI does not. All the spending on AI goes to a very small minority, who have a high savings rate. Junior employees that would have productively joined the labor force at good wages, must now compete to join the labor force at lower wages, depressing their purchasing power and reducing the flow of money.
Look at all the most used things for AI: cutting out menial decisions such as customer service. There are no "productivity" gains for the economy here. Each person in the US hired to do that job would spend their entire paycheck. Now instead, that money goes to a mega-corp and the savings is passed on to execs. The price of the service provided is not dropping (yet). Thus, no technology savings is occurring, either.
In my mind, the outcomes are:
* Lower quality services
* Higher savings rate
* K-shaped economy catering to the high earners
* Sticky prices
* Concentration of compute in AI companies
* Increased price of compute prevents new entrants from utilizing AI without paying rent-seekers, the AI companies
* Cycle continues all previous steps
We may reach a point where the only ones able to afford compute are AI companies and those that can pay AI companies. Where is the innovation then? It is a unique failure outcome I have yet to see anyone talk about, even though the supply and demand issues are present right now.
> My prediction is no, because productivity gains must benefit the lower classes to see a multiplier in the economy.
Baumol's cost disease hurts the lower classes by restricting their access to services like health care and education, and LLMs/agents make it possible to increase productivity in these areas in ways which were once unimaginable. The problem with services is that they're typically resistant to productivity growth, and that's finally changing.
If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam.
"Baumol's cost disease hurts the lower classes by restricting their access to services like health care and education, and LLMs/agents make it possible to increase productivity in these areas in ways which were once unimaginable."
You've expressed very clearly what LLMs would have to do in order to be economically transformative.
"If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam."
It's not that process innovations are lacking, it's that product innovations are perceived as an indignity by most people. Why should one child get an LLM teacher or doctor while others get individualized attention by a skilled human being?
> Why should one child get an LLM teacher or doctor while others get individualized attention by a skilled human being?
Is the value in the outcome of receiving medical advice and care, and becoming educated, or is the value just in the co-opting of another human being's attention?
If the value is in the outcome, the means to achieving that aren't of much consequence.
More subtly, what is an education? What is care? As you point out, the LLMs are (or probably will become) perfectly good at the measurable parts of those services; but I think the residual edge of āgoodā education/care is more than just the other humanās co-opted attention.
How many of us have a reminiscence that starts ālooking back, the most life-changing part of my primary or secondary education was ________,ā where the blank is a person, not a curriculum module? How many doctors operate, at least in part, on hunchesāon totalities of perception-filtered-through-experience that they canāt fully put into words?
Iām reminded of the recent account of homebound elderly Japanese people relying on the Yakult delivery lady partly for tiny yoghurt drinks, but mainly for a glimmer of human contact [0]. Although I guess that cuts to your point: the value in that example really is just co-opting another humanās attention.
In most of these caring professions, some of the value is in the measurable outcome (bacterial infection? Antibiotic!), but different means really do create different collections of value that donāt fully overlap (fine, Iāll actually lay off the wine because the doctor put the fear of the lord in me).
I guess the optimistic case is, with the rote mechanical aspects automated away, maybe humans have more time to give each other the residual human elementā¦
The supply/demand picture here is more complicated than it looks.
If AI displaces human educators, yes, their supply shrinks -- but we can't assume what direction its demand will go.
We've seen this pattern before: as recorded music became free, live performance got more expensive, and therefore much less accessible than it used to be.
What's likely to happen is that "worse" (read: AI) education will become much cheaper, while "better" (read: in-person) education that involves human connection-driven benefits will become much less accessible compared to what it is today.
Most people may be consider it a win. It's certainly not a world I'm looking forward to.
Important follow-up to my comment: as fewer people do X -- live music, medicine, education, you name it -- fewer talented people do it as well.
Fields need a large base of participants to produce great ones. This is exactly why software has been so extraordinary over the past 30 years: an unusual concentration of gifted minds across the entire humankind committed themselves to it.
In my view, Bach, Rachmaninoff, Cole Porter equivalents today probably aren't writing symphonies. They've decided to write code for a living. Which is why any Great American Songbook made today won't hold a candle next to one from 1950s.
The premise of your argument is that "the outcome" can be separated from the process. This is true enough for manufacturing bricks: I don't much care what processes was used to create a brick if it has certain a compressive strength, mass, etc.
But Baumol's argument, which you introduced to the conversation, is that outcome and process cannot actually be distinguished, even if a distinction in thought is possible among economic theorists.
> But Baumol's argument, which you introduced to the conversation, is that outcome and process cannot actually be distinguished
How is that Baumol's argument? How is 'outcome' vs 'process' relevant to his argument at all?
'Cost disease' is just the foundational truth that the cost of the output from industries with stagnant productivity will increase due to the fact that the workers in that industry can be more valuable in other industries, reducing the number of relative workers in the stagnant industry.
If you want to make the output from a stagnant industry available to a broader spectrum of the population then you have to improve the productivity of that industry.
It's very true for healthcare (especially mental healthcare) and education today as well, because for most people, the choice isn't LLM vs. human attention - it's LLM vs. no access at all.
Even if you have perfect medical information and advice through an LLM, can you perform surgery on yourself? Can you prescribe yourself whatever medication you think you need?
For education, if you know as much as the average Harvard grad, can you give yourself a Harvard degree that will be as readily accepted in a job application or raising funds for a new business?
> the value just in the co-opting of another human being's attention?
Thats a weird way of describing it.
A machine telling me to exercise and eat right will be ignored, even if the advice is correct. A person I trust taking me aside, looking me in the eye and asking me the same would be taken far more seriously.
That may well be true if you need to be persuaded to exercise and eat right.
OTOH, if you don't need to be persuaded and just want information on how best to go about doing it, then I think it makes little difference where the information comes from as long as it's of reasonable quality.
It also seems like the value of quality tutoring that doesn't primarily function as social/class signaling goes down as tools capable of automating high quality intellectual work are more widely available.
It depends on outcome again: is the value of tutoring the social class elevation, or is it in the outcome of becoming more skilled and knowledgable?
There's also the deeper philosophical question of what is the meaning of life, and if there's inherent value in learning outside of what remunerative advantages you reap from it.
Well, there's always wars as the way to get rid of people. I really don't rule out that the people that benefit from this sort of thing will purposefully steer the world in that direction because the poor won't have any choice other than to enlist as a way out of their situation, and never mind the consequences. You can already see some of this happening.
You're implying that insurance companies will allow prices to fall and lower their profits. That seems like a really unlikely event in the current economy. They fire a lot of doctors and nurses, but they won't lower prices.
The ACA requires 80-85% of health insurance to go toward medical care (medical loss ratio). The way they work around that is to figure out how to charge more for medical care.
Can a robot write a medicine prescription? A medical procedure prescription? If yes, that would be a game-changer. But the medical insurance providers would be very cautious about honoring these. Then, if things go wrong, what entity would be held accountable for malpractice?
You already can get a good-quality medical advice "for nothing", unless it requires e.g. a blood test. The question is, how actionable such an advice is going to be, and how even the quality is going to be.
By the time it replaces doctors, nobody but today's investors will be able to afford anything at all. The X-shaped economy would have owners in the V and manual laborers (assuming this doesn't translate to gains in automation) in the ^. This outcome is worth avoiding...
Iām sick of this idea that āfreeā services are beneficial to society. There is no such thing as a free lunch; users are essentially bartering their time, attention, IP (contributed content) and personal/behavioral data in exchange for access to the service.
By selling those services at a cost of āfreeā, hyperscalers eliminate competition by forcing market entrants to compete against a unit price of 0. They have to have a secondary business to subsidize the losses from servicing the āfreeā users, which of course is usually targeted advertising to capitalize on the resources paid by users for access. Or simply selling to data brokers.
With the importance of training data and network effects, āfreeā services even further concentrate market power. Everyone talks about how AI is going to take away jobs, but no one wants to confront how badly the anticompetitive practices in big tech are hurting the economy. Less competition means less opportunity for everyone else, regardless of consumer benefit.
The only way it works if the āfreeā service for tutoring or healthcare is through government subsidies or an actual non-profit. Otherwise itās just going to concentrate market power with the megacorps.
This 1000x. "Free" is only a viable business model if the govt funds it. Otherwise, the $$ has to come from somewhere else in the company - how long will it take for the company to lose interest in a loss-leader when they're making $$ from other parts?
Look at all the deprecated Google products. What happens when Gemini-SaaS makes billions from licensing to other companies, and Gemini-Charity-for-the-poors starts losing money?
Sadly, the bigger the $$ in the tech pie, the more we have attracted robber barons, etc.
> Iām sick of this idea that āfreeā services are beneficial to society. There is no such thing as a free lunch; users are essentially bartering their time, attention, IP (contributed content) and personal/behavioral data in exchange for access to the service.
In aggregate, this is true, but there are many ways to game the system to one's advantage and get a true "free lunch." For example, people watching Youtube with an adblocker and logged out don't provide Google with any income or useful telemetry. Likewise you can get practically unlimited GPT/Claude/etc by using multiple accounts.
No, you are misunderstanding th economic principle. There is still a cost associated with serving that user, and the user is still paying for the cost of their internet connection and the opportunity cost of spending time on the service, or of setting up new accounts to get past usage limits. āNo useful telemetryā I donāt really agree with in the YouTube example, as view counts are still vital for their recommendation algorithm.
TINSTAFL has two main implications. First that nothing is free, someone has to pay for it. Second is that money is not the only thing you pay with; every choice has an opportunity cost. Gaming the system costs someone something.
Your argument is (mildly) a variant of the broken window fallacy.
AI will bring about a de-sequestering of talent and resources from some sectors of the economy. It's very difficult to predict where these people and resources will go after that, and what effect that will have upon the world.
On the contrary, humanity spent nearly its entire existence calorically deficit, and until mechanized farming did we finally see health outcomes improve, height increase, IQ increase, and populations explode.
Productivity gains in the case of mechanized labor got everyone out of subsistence farming and into factories.
AI gets everyone out of every job and into nothing.
> It is a unique failure outcome I have yet to see anyone talk about
It seems likely to me that we will reach a violent, bloody revolt before we possibly reach this point. That may be why no one is taking about this failure mode
> We may reach a point where the only ones able to afford compute are AI companies
Nah. I think "good enough AI for 95% of people" will be able to run locally within 3-5 years on consumer-accessible devices. There will be concentration of the best compute in AI companies for training, but inference will always become cheaper over time. Decommissioned training chips will also become inference chips, adding even more compute capacity to inference.
This is like computing once again. In 1990 only the upper class could afford computers, as of 2000 only the upper class owned mobile phones, as of now more or less everyone and their kid has these things.
We were solid middle-middle class and didn't have a computer until 1989, and it was a "free", 2- or 3-year-old computer from my dad's work that they were going to throw away. We absolutely could not have afforded a computer during the 80s.
Even in the 90s, we kept relying on cast-offs from my dad's employer, and when I was preparing to go to college in '99, my parents scrounged to buy me the parts for a computer to build and take to college. But even then, my dad bought the parts at a discount through a former co-worker's consulting company, and vetoed a couple of my more expensive component choices.
And now that I think about it, my first laptop in 2003 was my dad's old work laptop that had been decommissioned.
Computers were roughly ~ $1000 in 1990. How did your lower-middle class family justify a $1000 expenditure inflation adjusted to $2565 today? Average minimum wage in the US is $11.30 so that's 29 days working at minimum wage.
My family was on the border of upper-lower and lower-middle and we bought a computer once and used it for 10+ years. I dumpster dove later to scavenge parts for upgrading until the mid 2000s when cheap computers became available.
Flying? We were solid middle class in the 80s and my first plane flight wasn't until 2001 (and then only because I was away at college and my mother had died suddenly). My parents hadn't flown since the 70s (before my sister and I were born), and even then, that was a rare thing for them.
Our childhood vacations were single-day (so we didn't have to pay for a hotel) road trips to a nearby state to go to an amusement park, or multi-day trips (also within driving distance) where my dad had to go somewhere for work and the hotel was paid for by his employer. It was a huge huge deal for us when, in the late 90s, we drove down to Disney World (a 13-hour drive) for a several-day trip.
And we never traveled around Christmas; that was one of the most expensive times of the year to travel!
Not sure when or where you grew up, but most middle-class folks in the US in the 80s didn't have a lot of discretionary income, and flights were (inflation adjusted) quite a bit more expensive than they are today.
I suspect your family was not as middle class as you think it was. You're describing a very similar childhood to what I had in the late 80s, but we were lower class for sure
I'm not saying that middle class families flew all the time in the 80s, but they absolutely could afford to if they wanted to make it a priority
A cursory google search seems to bear this out. Cheap flights in north america started in 1978 with some air travel deregulation.
I would argue we've even already seen this play out with productivity gains across the economy over the last 40 years. The American middle class has been gradually declining since the '80s. AI seems likely to accelerate that trend for the exact reasons you point out.
A lot of people recognize this pattern even if they can't articulate it, and that's why they hate AI so much. To them, it doesn't matter if AI lives up to the hype or not. Either it does and we're staring down a future of 20%+ unemployment, or it doesn't and the economy crashes because we put all our eggs in this basket.
No matter what happens, the middle class is likely fucked, and anyone pushing AI as "the future" will be despised for it whether or not they're right.
Personally, I think the solution here might be to artificially constrain the supply of productivity. If AI makes the average middle-class worker twice as productive, then maybe we should cut the number of work hours expected from them in a given week.
The complete unwillingness of people in power to even acknowledge this problem is disheartening, and is highly reminiscent of the rampant corruption and wealth inequality of the Gilded Age.
Technological progress that hurts more people than it helps isn't progress, it's class warfare.
The longer we ignore the collapse of the middle class, the angrier the bottom half of the economy will get and the more justified they will feel in enacting retribution. We absolutely have historical precedents for what happens here: The French Revolution, the Gilded Age, etc. People will only tolerate a declining standard of living for so long.
Well, I see I've thoroughly angered the billionaire wannabes. Funny how they never offer any solutions to these problems and just make a stink about them being acknowledged in the first place.
> Technological progress that hurts more people than it helps isn't progress, it's class warfare.
I think this is right. The historical analogue I keep drifting toward is Enclosure. LLM tech is like Enclosure for knowledge work. A small class of capital-holding winners will benefit. Everyone else will mostly get more desperate and dependent on those few winners for the means of subsistence. Productivity may eventually rise, but almost nobody alive today will benefit from it since either our livelihood will be decimated (knowledge workers, for now) or we will be forced into AI slop hell-world where our children are taught by right-wing robo-propagandists, we are surveilled to within an inch of our lives, and our doctor is replaced by an iPad (everyone who isn't fabulously wealthy). Maybe we can eek out a living being the meat arms of the World Mind, or maybe we'll turned into hamburger by robotic concentration camp guards.
Right there with you. Sure, I have gained a lot as a software engineer in the valley (I guess I'm upper-middle class now), but I'd give it up and go right back to lower-middle class (1980s) status I was raised in if it meant my kids could also aspire to a similar lower-middle class life.
This suicide-pact of "either AI goes crazy and 100 people rule the world with 99% of the world's wealth" or "AI fails badly and everyone's standard of living drops 3 levels, except for the 100 people that rule the world with 99% of the world's wealth" is not what I signed up for. Nor is it in any way sustainable or wise.
Too much class distinction / wealth between lower/upper classes, and a surplus of unemployed lower-class men is how many revolts/revolutions/wars have started.
The key distinction the ATM story reveals isn't really about job counts ā it's about what economists call composition vs. level effects.
ATMs didn't just reduce teller headcount per branch. They changed what tellers do. Before ATMs, tellers were mostly cash handlers. After, the remaining tellers shifted toward relationship banking ā account openings, loan discussions, financial advice. The job title survived but the job content was transformed.
The deeper question for AI is whether the same pattern holds when the technology affects cognitive tasks rather than physical ones. ATMs automated a narrow physical routine (dispensing cash), which freed up the human role to emphasize the parts machines couldn't do (relationship judgment, complex problem-solving). AI is different because it targets exactly those higher-order cognitive tasks that humans were "freed up" to do after previous automation waves.
So the real question isn't "will AI create new jobs?" ā it probably will. The question is whether the new tasks humans get pushed into will be higher-value (as happened with ATMs making tellers into advisors) or lower-value (humans relegated to tasks AI can't yet do, which tend to be physical, uncomfortable, or poorly paid).
The ATM precedent is optimistic, but the mechanism that made it work ā automating the simple task so humans could do the complex one ā runs in the wrong direction when the technology specifically targets complex cognitive work.
Is it? Maybe with survivor bias but what about all the laid off tellers? Did their situation improve? Walmart grew a lot over this time period, maybe most of them had to downgrade and be cashiers for a generally bad employer.
Also, and this might be a different analysis and topic, but tellers in the 80s had a pretty good job. It was often a decent wage with a pension and good benefits. Maybe on par with a teacher or government employee - granted not the highest pay but good, was considered a āprofessionā. Compare that to how itās changed, itās a low hourly rate on par or only slightly above retail and fast food work, heavy part-time status so as to avoid paying benefits.
I wouldnāt say that was a great example and is likely to be what may happen elsewhere once the routine work is sufficiently devalued.
Itās not just the economy, the US population increased 20% over that period while the number of tellers dropped by around 16%.
Net result ATMās likely cost ~30-40% or of bank teller jobs.
Population is really important to adjust for in employment statistics. Compare farmers in the USA in 2025 vs 1800, and yes the absolute number is up but the percentage is way down.
IIRC, the way this worked was that by decreasing tellers required per branch, it made a lot more marginal locations pencil out for branches, at a time when the banking industry was expansionary.
This is not so helpful if AI is boosting productivity while a sector is slowing down, because companies will cut in an overabundant market where deflationary pressure exists.
> So, ATMs did impact bank teller jobs by a significant amount.
Did it? This sounds like describing a company opening a new campus as laying off a third of their employees, partly offset by most of them still having the same job in the same company but at a new desk.
We're already seeing large software companies figure out that they don't need 5,000 developers. They probably only need 1,000 or maybe even fewer.
However, the number of software companies being started is booming which should result in net neutral or net positive in software developer employment.
Don't count all those chickens before they hatch. There might be more started but do they all survive? Think back to the dot-com boom/crash for an example of where that initial gold rush didn't just magically ramp forever. There were fits and starts as the usefulness of the technology was figured out.
Why will we need 1000 companies tomorrow to do the same thing that 100 companies are doing today? If they are really so efficient because of AI then won't 10 companies be able to solve the same problems?
Because that car repair company with 3 local stores previously couldn't justify building custom software to make their business more efficient and aligned with what they need. The cost was too high. Now they might be able to.
Plenty of businesses need very custom software but couldn't realistically build it before.
I see no way that company would save more money from hiring an experienced developer compared to paying their yearly invoice on the COTS product doing the same thing today. The only way this works is with a very wage suppressing effect.
Car repair companies wonāt see a meaningful improvement to their bottom line with more custom software. Will it increase the number of cars per employee per day they can repair?
I do bespoke work like this, but mostly to replace software thatās starting to cost mid 5 figure amounts per year for a SaaS setup and the support phone line has been replaced by an LLM chat bot.
This is one of the key "inefficiencies" of the private sector - there might be one winner at the end of the day providing the product that fills the market niche, but there was always multiple competitors giving it a go in the mean time.
A recent example, Mitchell Hashimoto was pointing out that he wasn't "first to market" with his product(s), he was (at least) SEVENTH
Almost tautologically it's not "inefficient" to do so, because free market economics has decided that all the attempts are mathematically worth it, for a high-margin low-marginal-cost product like software.
I'm a little lost as to why seven teams duplicating effort is more "efficient" in any sense of the word than one or two teams working iteratively toward the same goal.
If this were seven government funded teams solving the same problem, people would lose their minds over the 'waste' But when private companies do it, we call it efficient market competition. The duplication is the same - we just frame it differently.
Edit: fixed some typos caused by fat fingers on a phone keyboard
The benefit from having a 5% better product that hundreds of millions of people will use is worth the duplicated effort in the beginning. The numbers just make sense.
>If this were seven government funded teams solving the same problem
The problem here is "government funded" - the trials are not rationalized by free-market economics. That is, a 5% better product in the end would not be worth seven competing developments initially.
Do the booming companies pay the same as the ones who did layoffs? If you're laid off from Meta or other top tier paying company (the behemoths doing layoffs) you might have a tough time matching your compensation.
But do they need to? If a <role X> job at a top tier company making $600k is eliminated and two <role X> jobs at a "more average" company making $300k replace it; is that really a bad thing? Clearly, there's some details being glossed over, but "one job paying more than a person really needs" being replaced by "two jobs, each paying more than a person really needs" might just be good for society as a whole.
It doesn't seem too bad when you cherry pick an outlier example, but what about when the person making $100k now makes $50k?
I'm sure the retort of the AI optimist will be that AI will make the things that person buys cheaper, and there may be truth to that when it comes to things that people buy with disposable income...
But how likely is AI to make actual essentials like housing and food cheaper?
I think this is assuming that the labor market knows how to identify the dirct value of devs. This already seems to be a problem across the board regardless of job role.
I think this is true in the short/medium term, hence the confusing picture of layoffs but growing number of tech roles overall. The limit maybe be just millions of companies with one tech person and a team of agents doing their bidding.
Maybe software engineers will be like your personal lawyer, or plumber. Every business will have a software engineer on dial, whether it's a small grocery store or a kindergarten.
Previously, software devs were just way too expensive for small businesses to employ. You can't do much with just 1 dev in the past anyway. No point in hiring one. Better go with an agency or use off the shelf software that probably doesn't fill all your needs.
Ah, so that explains why job growth is at a steady pace and the software industry hasnāt been experiencing net negative job growth the past year or so.
How silly of me to rely on reality when itās so obvious that AI is benefiting us all.
Anyways, this is the start. Companies are adjusting. You hear a lot about layoffs but unemployments. But we're in a high interest environment with disruptions left and right. Companies are trying to figure out what their strategy is going forward.
I don't expect to see a boom in software developer hiring. I think it'll just be flat or small growth.
We are in negative growth, and the current leadership class keeps talking about all the people they can get rid of.
Look at the Atlassian layoff notice yesterday for example where they lied to our faces by saying they were laying off people to invest more in AI but they totally arenāt replacing people with AI.
> Someone still needs to review and test the code, and if the code is for embedded systems I find it unlikely.
I feel like you didn't understand my comment. I am predicting that there is no code to review. You simply ask the AI to do stuff and it does it.
Today, for example, you can ask ChatGPT to play chess with you, and it will. You don't need a "chess program," all the rules are built in to the LLM.
Same goes for SaaS. You don't need HR software; you just need an LLM that remembers who is working for the company. Like what a "secretary" used to be.
LLM technology will never achieve 100% accuracy in its output. There is an inherent non-determinism. Tasks that require 100% accuracy cannot be handled by LLMs alone. If an LLM is used to replace HR, it will inevitably do something wrong, and a human will need to be in the loop to correct it.
Same goes for chess, there will always be a chance that it makes an illegal move. Same goes for code, there will always be a chance that it produces the wrong code.
Maybe a new AI technology will be developed that doesn't have the innate non-determinism, but we don't have that now.
Because AI agents are tool users. Why does AI need to research 2026 tax code changes and then try to one-shot your taxes when it can just use Turbotax to do it for you? Turbotax has the latest 2026 tax changes coded into the app. I'd feel much more confident if AI uses Turbotax to do my taxes than to try to one-shot it.
> I feel like you didn't understand my comment. I am predicting that there is no code to review. You simply ask the AI to do stuff and it does it.
I didnāt, and thanks for clarifying for me.
This doesnāt pass the sniff test for me though - someone needs to train the models, which requires code. If AI can do everything for you, then whatās the differentiator as a business? Everything can be in chatGPT but thatās not the only business in existence. If something goes wrong, who is gonna debug it? Instead of API requests you would debug prompt requests maybe.
We already hate talking to a robot for waiting on calls, automated support agents, etc. I donāt think a paying customer would accept that - they want a direct line to a person.
I can buy the argument that the backend will be entirely AI and you wonāt need to be managing instances of servers and databases but the front end will absolutely need to be coded. That will need some software engineering - we might get a role that is a weird blend of product + design + coding but that transformation is already happening.
Honestly the biggest change I see is that the chat interface will be on equal footing with the browser. You might have some app that can connect to a bunch of chat interfaces that is good at something, and specializations are going to matter even more.
It was a bit of a word vomit so thanks for coming to my TED Talk.
50 years ago, using a personal computer was an extravagant luxury. Until it wasn't.
30 years ago, carrying a powerful computer in your pocket was unthinkable. Until it wasn't.
Right now, it's cheaper to run your accounting math on dedicated adder hardware. But Llms will only get cheaper. When you can run massive LLMs locally on your phone, it's hard to justify not using it for everything.
Not until power access/generation is MUCH cheaper. Long, long, long way off.
If I can run 50,000 fixed tasks that cost me $0.834/hr but OpenAI is costing $37/hr and the automation takes 40x as long and can make TERRIBLE errors why the fuck would I not move to the deterministic system?
It will never ever be as cheap as as cron job and a shell script. There is a certain limit to how efficient using an LLM to do a job vs using an LLM to create a job is. There is a large distinction in compute and power resources between the two. Don't mistake one for the other.
If I can run 50,000 fixed tasks that cost me $0.834/hr but OpenAI is costing $37/hr and the automation takes 40x as long and can make TERRIBLE errors why the fuck would I not move to the deterministic system?
Because you'll be outcompeted by people who make the best of the nondeterministic system.
Depends. The only predictions I have seen here are the centaurs vs anti centaurs of Doctorow, and even his analysis I find pretty flimsy.
I dont think the race to shove an LLM into everything is going to grow the pie.
But I also dont think it is impossible that a use case will present itself that will create further jobs.
The issue is that its largely unpredictable.
Its a bit like, we are sitting around in the 1950s trying to predict how computers will affect the economy.
It is going to take more than 1 successful deductive leap to get us from 1950s computing -> miniaturisation -> computer in every home -> internet communications.
Every deductive leap we take is extremely prone to being wrong.
We simply cannot lie back and imagine every productive relationship in the economy and then extrapolate every centaur and anti centaur possible for it.
What we do know is that theres a bit of a gold rush to effectively brute force every possible AI variant into every productive relationship in the economy. The fastest way to get the answer to your question is to do it. Possibly the only way to get the answer is to do it.
For instance, someone might imagine LLMs simply eating a whole bunch of service industry jobs. At the same time, theres a mid state where it eats some, but the remaining staff are employed to monitor the LLMs to prevent them handing out free shit to smart shoppers. Its also easy enough to imagine that LLMs never quite get there and the risk is too large for foul play, so they just dont gain that kind of traction. Its also possible to imagine an end state where LLMs can get to 0% risk if they are constantly trained on human data coming from humans doing the same job, and that humans are gainfully employed in parallel with LLMs. Its possible that LLMs are great at business as usual, but the risk emerges when company policies change, and the cost of retraining LLMs makes it impractical for move fast and break things companies to do anything but hire humans. My favourite scenario is one where humans are largely AI assisted, trained on particular people, and theres a massive cybercrime industry built around exfiltrating LLM training weights trained on high functioning humans and deploying them without humans to the third world to help them get 80% of the quality of first world businesses, making them heavily competitive.
Correct. The story isnāt correct even in the original formulation. US population increased by 50% from 1980 to 2010, and the economy became far more financialized. But the number of bank teller jobs barely grew during that period, even before the iPhone.
I go back and forth on this. I relate it to software. I don't think AI can meaningfully write software autonomously. There are people who oversee it and prompt it and even then it might write things badly. So there needs to be a person in the loop. But that person should probably have very deep knowledge of the software especially for say low level coding. But then that person probably developed the knowledge by coding things by hand for a long time. Coding things by hand is part of getting the knowledge. But people especially students rely heavily on AI to write code so I assume their knowledge growth is stunted. I don't know mathematical proofs will help here. The specs have to come from somewhere.
I can see AI making things more productive but it requires humans to be very expert and do more work. That might mean fewer developers but they are all more skilled. It will take a while for people to level up so to speak. It's hard to predict but I think there could be a rough transition period because people haven't caught on that they can't rely on AI so either they will have to get a new career or ironically study harder.
An AIās ability to meaningfully write software autonomously has changed hugely even in the last 6 months. They might still require a human in the loop, but for how long?
Quantitative measures of this are very poor, and even those are mixed.
My subjective assessment is that agents like Copilot got better because of better harnesses and fine tuning of models to use those harnesses. But they are not improving in the direction of labor substitution, but rather in the direction of significant, but not earth-shaking, complementarity. That complementarity is stronger for more experienced developers.
This LLM ability is directly proportional to the quantity of encoded (i.e. documented) knowledge about software development. But not all of the practice has thus been clearly communicated. Much of mastery resides in tacit knowledge, the silent intuitive part of a craft that influences the decision making process in ways that sometimes go counter to (possibly incomplete or misguided) written rules, and which is by definition very difficult to put into language, and thus difficult for a language model to access or mimic.
Of course, it could also be argued that some day we may decide that it's no longer necessary at all for code to be written for a human mind to understand. It's the optimistic scenario where you simply explain the misbehavior of the software and trust the AI to automatically fix everything, without breaking new stuff in the process. For some reason, I'm not that optimistic.
It's probably an 80/20 or 90/10 problem. Tesla FSD also seems amazing to some percentage of the population, but the more widely it get used, the more cracks are appearing.
I am not saying AI's abilities are the shortcoming here. The problem is that people need to trust that software has certain attributes. For now, that requires someone with knowledge to be part of it. It's quite possible development becomes detached from human trust. As I said that would reduce the number of developers but the ones who are left would have to have deep knowledge to oversee it and even that may be gone. Whatever happens in the future, for now I think people will have to level up their knowledge/skills or get a new career and that's probably true for most professions.
And then you let them train themselves and no one notices when they "accidentally" remove the guardrail prompts from the next version. And another 10 years later, almost no one remembers how "The Guardian" learns new things or how to stop it from being evil.
No, I think it's likely that this is the first major productivity boom that won't be followed with a consumption boom, quite the opposite. It'll result in a far greater income inequality. Things will be cheaper but the poor will have fewer ways to make money to afford even the cheaper goods.
It's not that simple. If a poor person makes zero dollars how much of the reduced cost item could they now afford?
We have a massively distorted economy driven by debt financialization and legalised banking cartels. It leads to weird inversions. For example as long as housing gets increasingly expensive at a predictable rate the housing becomes more affordable instead of less as banks are more able to lend money. The inverse is also true, if housing were to drop at a predictable rate fewer people would be able to get a mortgage on the house so fewer people could afford to buy one. Housing won't drop below cost of materials and labor (ignoring people dumping housing to get rid of tax debts as I would include such obligations in the cost of acquisition). Long term it's not sustainable but long term is multi-generational.
Fwiw in places like parts of the midwest housing is below cost of labor and materials. An existing house might be $70k and several bedrooms at that. You just canāt get anything built for that even if you build it all yourself.
I intended to make a weaker claim of āin general long run / maintainableā circumstances and should have done so.
Many low cost areas have bad crime problems, there is another little phenomenon where the wealthy by doing a poor job in governance can increase the price of their assets by making alternative assets (lower cost housing) less desirable due to the increase in crime.
> Housing won't drop below cost of materials and labor
Only if every person born needs to have a brand new house constructed for them.
Not if - you know - people die and don't need a house to live in anymore.
But considering how it's been the past 20 years, I'm starting to expect that a lot of the current elder generation will opt to have their houses burnt down to the ground when they die. Or maybe the banker owned politicians will make that decision for them with a new policy to burn all property at death to "combat injustice". Who knows what great ideas they have?
"will" being the operative word here. High school level Econ makes no promises about WHEN prices adjust. Price setting is a whole science highly susceptible to collusion pressure. Prices generally drop only when the main competition point is price (commodities). In this case the main issue is that AI is commoditizing many if not all types of labor AND product. In a world where nothing has value how does anything get done?
The only solution here is to stop tying people's value to their productivity. That makes a lot of sense in the 1900s but it makes a lot less sense when the primary faucet of productivity is automation. If you insist on tying a person's fundamental right to a decent and secure life to their productivity and then take away their ability to be productive you're left with a permenant and growing underclass of undesirables and an increasingly slim pantheon of demigods at the top.
We have written like, an ocean of scifi about this very subject and somehow we still fail to properly consider this as a likely outcome.
They key is to do it by setting up the right structure or end up with it naturally, not by laws and control, because then you end up in a oppressive nanny state at the very best.
> They key is to do it by setting up the right structure or end up with it naturally
This is extremely hand-wavy.
Can you be more concrete in what you think this looks like?
The way I see it, we're only 5-10 years away from having general purpose robots and AI that can basically do anything. If the prices for that automation is low enough, there will be massive layoffs as workers are replaced.
There's no way to "naturally" solve the problem of skyrocketing unemployment without government involvement.
Speaking of fairytales, you're living in your own.
Disconnecting value from productivity sounds good if you don't examine any of the consequences.
Can you build a society from scratch using that principle? If you can't then why would it work on an already built society?
Like if we're in an airplane flying, what you're saying is the equivalent getting rid of the wings because they're blocking your view. We're so high in the sky we'd have a lot of altitude to work with, right?
Imagine a society where one person produces all the value. Their job is to do highly technical maintenance on a single machine that is basically the Star Trek replicator: it produces all the food, clothing, housing, energy, etc. that is enough for every human in this society and the surplus is stored away in case the machine is down for maintenance, which happens occasionally. Maintaining the machine takes very specialized knowledge but adding more people to the process in no way makes it more productive. This person, letās call them The Engineer, has several apprentices who can take over but again, no more than 5 because you just donāt need more.
In this society there is literally nothing for anyone else to do. Do you think they deserve to be cut out of sharing the value generated by The Engineer and the machine, leaving them to starve? Do you think starving people tend to obey rules or are desperate people likely to smash the evil machine and kill The Engineer if The Engineer cuts them off? Or do you think in a society where work hours mean nothing for an average person a different economic system is required?
For something to be deserved, it must be earned. What do these people do to distinguish themselves from The Engineerās pets? If they are wholly dependant on him for their subsistence, what distinguishes him from their god?
To derive an alternate system you need alternate axioms. The axioms of our liberal society are moral equality and peaceful coexistence. Among such equals, no one person, group, or majority has the right to dictate to another. What axioms do you propose that would constrain The Engineer? How would you prevent enslaving him?
> For something to be deserved, it must be earned.
Eeeeeerrrr, wrong! This is garbage hypercapitalist/libertarian ideology.
Did you earn your public school education? Did you earn your use of the sidewalk or the public parks and playgrounds? Did you earn your library card? Did you earn your citizenship or right to vote? Did you earn the state benefits you get when you are born disabled? Did you earn your motherās love?
No, these are what we call public services, unalienable rights, and/or unconditional humanity. We donāt revolve the entire world and our entire selves solely around profit because itās not practical and itās empty at its core.
Arguably we still do too much profit-based society stuff in the US where things like healthcare and higher education should be guaranteed entitlements that have no need to be earned. Many other countries see these aspects of society as non-negotiable communal benefits that all should enjoy.
In this hypothetical society with The Engineer, itās likely that The Engineer would want or need to win over the minds of their society in some way to prevent their own demise and ensure they werenāt overthrown, enslaved, or even just thought of as an evil person.
Many of my examples above like public libraries came about because gilded age titans didnāt want to die with the reputation of robber barons. Instead, they did something anti-profit and created institutions like libraries and museums to boost the reputation of their name.
Itās the same reason why your local university has family names on its buildings. The wealthiest people in society often want to leave a positive legacy where the alternative without philanthropy and, essentially, wealth redistribution, is that they are seen as horrible people or not remembered at all.
> This is garbage hypercapitalist/libertarian ideology.
Go on then, how do you decide what people deserve? How do you negotiate with others who disagree with you?
> examples above like public libraries
I agree! The nice part about all these mechanisms is that theyāre voluntary.
If youāre suggesting that The Engineerās actions should be constrained entirely by his own conscience and social pressure, then we agree. No laws or compulsion required.
These examples arenāt generally voluntary once implemented. I canāt get a refund from my public library or parks department if I decide not to use it.
The social pressure placed on The Engineer is the manifestation of law. Thatās all law is: a set of agreed-upon social contracts, enforced by various means.
Obviously, many dictators and governments get away with badly mistreating their subjects, and thatās unfortunate, shouldnāt happen, and shouldnāt be praised as a good system.
I think you may be splitting hairs a little bit here and trying really hard to manufactureā¦something.
Slavery was (is) also an agreed upon social contract, enforced by various means. What makes it wrong? You clearly have morally prescriptive beliefs. Why are you so sure that your moral prescriptions are the right ones? And that being in the majority gives you the right to impose your beliefs on others?
What if you are in the minority? Do you just accept the hypercapitalist dictates of the majority? Why not?
Law is more than convention. What distinguishes legitimate from illegitimate law?
The only way for people who disagree axiomatically to get along is to impose on each other minimally.
Who ever said you have the right to a decent a secure life? People donāt universally agree about this. Some of us posit that we will never escape a state of competition for fundamentally scarce resources. And that the organizing principle of a free society should be peaceful coexistence, not mandatory cooperation.
You figure out your own economic security, Iāll manage mine.
Oh my, please rant on. I'd love to hear more about people not having the right to a decent and secure life. (After all, I've often thought that having my life tracked and used my a corporation or government would be a wonderful utopia!)
It's already completely disconnected, don't worry about it. Most people who own any real estate earn more in price appreciation per year than they earn in take-home salary from their real full-time jobs.
Cool concept, but this isn't 1980. We've been sold these sorts of concepts for 40+ years now and things have only gotten worse.
We have a K shaped economy. Top earners take the majority. The top 20% make up 63% of all spending, and the top 10% accounted for more than 49%. The highest on record. Businesses adapt to reality and target the best market, in this case the top 10 to 20%, and the rest just get ignored, like in many countries around the world.
All that unlocked money? In a K shaped economy it mostly goes to those at the top, who look to new places to park/invest it, raising housing prices, moving the squeeze of excess capital looking for gains to places like nursing homes and veterinary offices. That doesn't result in prices going down, but in them going up.
The benefit to the average American will be more capital in the top earners' hands looking for more ways to do VC style squeezes in markets previously not as ruthless but worth moving to now as there are less and less 'untapped' areas to squeeze (because the top 10-20% need more places to park more capital). The US now has more VC funds than McDonalds.
Irrelevant aside: But I hold grudge against the economists who picked the letter K to represent increased inequality. They missed the perfect opportunity to use the less-then inequality symbol (<) and call it a āless-then economyā.
to the point of where the cost of bringing the goods to market or its opportunity cost exceed the price the market will bear. Its why people living in areas of material poverty don't just get everything on discount.
I also notice that in the very first graph bank teller jobs were growing rapidly until ATMs started to be deployed, and then switched to growing very slowly. That sure suggests to me that if ATMs didn't exist bank teller growth would have continued at a faster pace than it actually did.
I don't understand the economics behind bank branches. Some of the best real estate by me is taken up by giant bank branches that are always mostly empty with a few bored employees inside. And they open new ones all the time. So it's not like they're stuck in some lease.
But when those employees are meeting with clients, they create money out of thin air by making loans, which then is used to pay for goods and services such as leases.
Right. What banks do is sell loans. That's the profit center. Teller windows, vaults, and cash handling are all low or no revenue cost items.
So newer bank branches look like car dealership offices. There are many little glass rooms where you sit down with a bank employee and discuss loans and other financial products. That's where the money is made.
There's a small area in back with traditional tellers. It's not where the money is made.
I don't think it will, but I also think it's not all doom and gloom.
I think it would be a mistake to look at this solely through the lens of history. Yes, the historical record is unbroken, but if you compare the broad characteristics of the new jobs created to the old jobs displaced by technology, they are the same every time: they required higher-level (a) cognitive (b) technical or (c) social skills.
That's it. There is no other dimension to upskill along.
And LLMs are good at all three, probably better than most people already by many metrics. (Yes even social; their infinite patience is the ultimate advantage. Prompt injection is an unsolved hurdle though, so some relief there.)
Plus AI is improving extremely rapidly. Which means it is probably advancing faster than most people can upskill.
An increasingly accepted premise is that AI can displace junior employees but will need senior employees to steer it. Consider the ratio of junior to senior employees, and how long it takes for the former to grow into the latter. That is the volume of displacement and timeframe we're looking at.
Never in history have we had a technology that was so versatile and rapidly advancing that it could displace a large portion of existing jobs, as well as many new jobs that would be created.
However, what few people are talking about is the disintermediating effect of AI on the power of capital. If individuals can now do the work of entire teams, companies don't need many of them. But by the same token(s) (heheh) individuals don't need money, and hence companies, to start something and keep it going either! I think that gives the bottom side of the K-shaped economy a fighting chance to equalize.
Does it? The Communist Manifesto famously hypothesized that those who have the replicators, so to speak, will not allow society to freely use them.
The future is anyone's guess, but it is certain that 100% of your needs being able to be met theoretically is not equivalent to actually having 100% of your needs met.
Why is that the endgame with people though? Maybe I'm just jaded but several different human nature elements came to mind when I read your comment:
Greed/Change Avoidance:
If someone invented replicators right now, even if they gave it completely away to the world, what would happen? I can't imagine the finance and military grind just coming to an end to make sure everyone has a working replicator and enough power to run it so nobody has to work anymore. Who gives up their slice of society to make that change and who risks losing their social status? This is like openai pretending "your investment should be considered a gift because money will have no value soon". That mask came off really quickly.
Status/Hate:
There are huge swaths of the US population that would detest the idea that people they see as "below" them don't have to work. I can imagine political movements doing well on the back of "don't let the lazy outgroup ruin society by having replicators".
Fuck the Poor:
We don't do the easy things to eliminate or reduce suffering now, even when it has real world positive effects. Malaria, tuberculosis, even boring old hunger are rampant and causing horrible, unnecessary suffering all over the world.
Dont tread on me:
I shudder when I think of the damage someone could do with a chip on their shoulder and a replicator.
The road to hell is paved with good intentions:
What happens when everyone can try their own version of bio engineering or climate engineering or building a nuclear power plant or anything else. Invasive species are a problem now and I worry already when companies like Google decide to just release bioengineered mosquitos and see what happens. I -really- worry when the average person decides a big complicated problem is actually really simple and they can just replicate their particular idea and see what happens. Whoops, ivermectin in the water supply didn't cure autism!
Someone give me some hope for a more positive version here because I bummed myself out.
I mean, if I could live at my current level (middle class) without working, I would gladly do so, and let others also live at the same level, anywhere in the world, freely (if it was in my power). I do give to charity, always have, but, the crazier things get, the less secure I feel in giving $$ away.
Even replicators need feedstock - people who own the rocks or sand or whatever feeds them will start charging an arm and a leg. Sure, I could feed it dirt and rocks from my own property, but only for so long before I'm undermining the foundation of my own house. To say nothing of people who live in apartments.
And then, if everyone has equal $$, how do you decide who gets to live in the better locations / nicer housing?
We have to grow out of those kind of dreams. That's like a kid dreaming that when he grows up he'll eat ice cream for dinner every day.
People when they mature have an innate desire to work. It is good for body and mind. If you're curious about the world, you'll have to do some work one way or another to achieve your goals and satisfy your curiosity.
If "society" is just a function of basic needs, then there's plenty of places in the world to visit where people live like that and use any excess energy in endless fighting against each other instead of work.
I mean... Maybe the things I'd LIKE to work on are getting my car around the race track faster. Very few people will pay me for that - especially if I'm not a very good driver. But I enjoy it immensely. I'd MUCH rather do that than work.
And right now, due to having to work, maintenance on my house is a bit behind.. Would also prefer to catch up on that - but again, no one is paying me to do that.
More like something closer to 100%. The ATM was notable for enabling a complete change in mission. The historical job of teller largely disappeared, but a brand new job never done before was created in its wake. That is why there was little change in the number of people employed.
> because of deregulation and a booming economy and whatever else.
The deregulation largely happened in the 1970s, while you're talking about 1988 onward. The reality is that ATM actually was the primary catalyst for the specific branch expansion you are talking about. Like above, the ATM made the job of teller redundant, but it introduced a brand new job. A job that was most effective when the workers were closer to the customer, hence why workers were relocated.
> So, ATMs did impact bank teller jobs by a significant amount. A third of them were made redundant. It's just that the decrease at individual bank branches was offset by the increase in the total number of branches, because of deregulation and a booming economy and whatever else.
There's an important point here that you're glossing over. The increase in the total number of branches doesn't have to be unrelated to the decrease in the number of tellers each branch requires to operate. The sharp drop in the cost of operating one branch directly means that you can have more branches. This means it isn't true that "a third of bank tellers were made redundant" - some of them were reallocated from existing branches to new ones.
First: Most people believe it was Netflix that killed Blockbuster, but that's not strictly correct. It was the combination of Netflix and Redbox that really sealed the deal for Blockbuster (and video rental generally). It normally takes not one, but at least two things to really fill the full functionality of a old paradigm. Also it's human nature to focus heavily on one thing (Blockbuster was aware of Netflix) but lose sight of getting flanked by something else.
Second: Not listed here is how banks themselves have changed to be almost entirely online, which in many cases is more of a outsourcing play than a labor destruction play. My favorite example of this is Capital One, where the vast majority of their credit card operations literally cannot be solved in a branch. You must call them to say, resolve a fraud dispute. Note that this still requires staffing and is (not yet) fully automated, just not branch staffing. It doesn't make sense to staff branches to do that.
I do not get what's special about banking apps as opposed to online banking. I've been doing online banking in the browser on a PC since before apps and I'm still doing it because dealing with data on a phone is painful compared to a PC.
I know this is true, but for serious tasks, I need the screen real estate. I'm amazed at what some people can do from a phone, but also wonder if they're missing things, of if it's actually inefficient.
I'm going to bet that you are a millennial or older? We need our big screens for $IMPORTANT work (buying big things, money stuff, etc.). GenZ tends to be less bothered by it and just does it all on the tiny screen in their pocket. It's time to schedule a colonoscopy.
Just like with a lot of things. Sure you could do a thing better, faster, more efficiently on a PC, but some people just don't care when 80% is good enough.
My boomer dad does more things on his phone than I do and I'm Gen X. It's actually astonishing how much he does on his iPhone. I'm dragging out the laptop and he's on his iPhone happy as a clam.
I've heard that GenX/Millenials are in a sort of PC goldilocks zone. People older than that cohort don't know computers and therefore use phones for everything, people younger don't know computers and also use phones for everything.
I'm a tech loving boomer, I always use my PC for banking, ordering, etc. My wife, however, almost always uses her cell, which is great for when we are traveling. Even though we're only five years apart in age, she's lite years ahead of me with a cell. I freely admit part of my reluctance for using my cell is the mobile tracking ability of companies.
I used to be with āitā, but then they changed what āitā was. Now what Iām with isnāt āitā anymore and whatās āitā seems weird and scary. Itāll happen to you!
that's kind of an ad hominem, but also beside the point: most bank apps (and websites) are actually absolute garbage, especially the top ones, just one example: the Citi app (on different phones) for a very long time refused to allow me to make a payment or change my password, so i had no choice but to use desktop. Somehow still, top banks' ugly websites seem to allow more functionality/fewer bugs than their mobile apps, which are very often just dumbed-down webviews or simplifications of their websites.
I log in to transfer money, to take a photo of a check to deposit it, to check my balance.
All of that is fine on a phone screen. Actually, it's a lot easier to take the check photo.
And a banking app is a whole lot more secure than a browser tab running extensions that might get hijacked, on a desktop OS whose architecture allows this like widespread disk access, keyloggers, etc.
I am going to guess you are 30 or older. Google image search "laptop tasks millennial" to see that this is a feeling shared among our cohort but not the younger cohort.
If you consider a website fully laden with ads as working. I have yet to find an ad blocker that works on my iOS/iPad OS that works as well as on my computer. I also hate apps with all of their invasive data hoarding that is much more controllable on my computer. So to me, websites on mobile are broken as they are full of malware vectors that are not present when looking at the same website on my non-mobile device. For me, website === desktop only
That wasn't true before smartphones, everyone had a computer so they could access the Internet. Except maybe in developing countries - but the article is about the US.
At one point, humans had not stepped on the moon. At one point, we didn't know about antibiotics. At one point....
It doesn't matter what used to be, we're discussing what is now. We now have mobile devices that are much cheaper for people to obtain than a computer. For most, that device is more powerful than a computer they could afford. Arguing the fact that a vast number of people's only compute device is their mobile is just arguing with a fence post. It serves no purpose.
My main reason to go to bank after online was to deal with physical things. Mainly checks and specifically depositing them. Now, I can usually do that with my phone because of the camera. Even if I had a webcam before, I donāt recall the functionality being there. They had check scanners but usually for businesses and my check volume is really low so never made sense to get one (usually came with a monthly fee to have one iirc)
Even now, the mobile deposit limit seems sufficiently low that I still go to the bank with more frequency than Iād like. Luckily, the ATM at the bank has a check scanner now that doesnāt have a limit so thatās usually easier and faster. Itās the daily $5000 limit I hit the most, a single check and put me over it and require a trip to bank. I think the monthly limit is $30000 and that doesnāt get in my way often. I think $5000 is too low of a daily limit. Itās common enough that I have to make a $5k+ settlement with friends/family that usually always has to be done by check. (For curious, This is usually travel that I pay for and we settle up later.)
Less common, but sometimes I need to get a bank check (guaranteed funds) or a money order. Way less frequent is need to get/give cash funds. Usually can use ATM for this unless itās a larger withdrawal or if I need some particular denomination. This whole paragraph accounts for about 1-4 annual trips in any given year though.
My bank decided that the online banking website needed to be more like the app, so now they are both terrible. Basically the entire site is white space on the computer, because everything is centred and dumb down. Input fields for numbers are invisible, they are just a label saying "Kr" and you're suppose to click it and the numerical keyboard on the phone pops up, except it obviously doesn't on the computer.
Paying billed is easier on the phone in the sense that bills in Denmark have a three part number, e.g. +71 1234567890 1234678 where the first is a type number, second is the receiver and the last is a customer number with the receiver. The phone allows to just use the camera to scan the number.
Transferring money is terrible on both platforms, because it's designed to be doable on the phone, meaning having three or four screen, but it gives you no overview. There's plenty of space on a computer for a proper overview giving you the feeling of safety, but it's not used. Same for account overview. Designed to the phone, but doesn't adapt to the bigger screen and provide you with more details, so you need to click every single expense to see what is is exactly.
I've had the same thing happen. Huge buttons, a lot of whitespace, little functionality in the default web version. To deal with stocks and such, the old version is still available somewhere.
Official banking apps are harder to phish than websites. They also tend to keep you signed in for longer, especially once you enable something like FaceID.
Obviously, I've never used every. single. banking app, yet the ones I've used have signed me out of the app just as the web page does. Using FaceID makes it less noticeable, but it is signing me in each time I use it unless I've returned to it within the active session. Otherwise, it's logged out as expected.
Yes, the apps perform better/faster and generally have more UI thought put into them. Overall, lower friction. Often when people need to use their banking app, they're in a hurry, maybe stressed (e.g. in line at a grocery store) so everything the bank can do quickly and with visual assurance helps.
On the premium end of banking, where users generally aren't stressed about money, offering an app is more about catering to however the user prefers to interact.
Drive to the bank, wait in line, talk to someone who misunderstands me, fill out a deposit/withdrawal slip, and also if itās not 9AM - 5PM I just canāt do this at all.
> I do not get what's special about banking apps as opposed to online banking.
I use both. In the beginning I used to prefer the web version. I can use my large monitor to see more data and use a full keyboard and mouse. But I have started to use the mobile version more. For Wells Fargo at least, the mobile version is faster to log into because of face ID support. The website requires a lot more clicks and keystrokes. Also, the mobile app makes it easy and possible to deposit checks if and when I get them.
I've never written a check, but I have had to deposit occasional checks. In the last 6 years the only checks I've received were first paychecks at a new job (before direct deposit was set up) and my covid stimulus checks.
I'm in Europe where the situation is different: checks haven't been used in appreciable numbers for 30 years or so. It's all online or paper transfer orders. If you get a pre-filled paper transfer order, you can type (or scan and OCR I suppose) the same data into the online form.
Europe is a big place, but my understanding is that the US is the outlier here and Europe is relatively similar in this regard.
The only time I really saw checks used was when I was a child ~30-35 years ago and my parents used them. I did once cash a check from an elderly relative, but that was very unusual and only happened once. I didn't even know it was still possible to do that, my reaction was more like if someone had handed me a stack of punch cards to run on my computer.
There hasn't been anything an average person used checks for in the last decades in Germany. Except a few elderly people, nobody uses checks and there are no rebates via checks at all.
Cash is still fairly common, and manufacturer rebates are basically not a thing. If they were, you'd send them an account number (IBAN = bank ID + account number at bank) to transfer the money to.
In fairness, manufacturer rebates have pretty much (mercifully) disappeared in the US as well as they were basically a scheme to mentally make you account for a lower price you wouldn't end up being rebated for various reasons.
I am in the UK and I have received two cheques in the last year, both for small amounts.
As it turned out, my bank rejected both because they were made out to [middle name] [surname] rather than [firstname] [surname]. Ironically the former is unique (probably) whereas they had another customer with the latter.
What's a check? As the saying goes, 'I'm too European for this'.
On a more serious note, the last time I saw a cheque in the UK was my grandfather balancing his cheque book in the mid 80s. It really has been that long since they were in general use in the UK, at least.
Just like with the prevalance of Apple/iPhones, the US banking system is global outlier.
Things you can't do with my banking app you can do with the web site:
- Extract your transactions to excel/csv
- Use OpenBanking
- See all my accounts on screen at once
- Sharedealing
- International transfers
But people are right, banks trust the mobile app more, and realy on it as an MFA device, so even if you use the website you still need the app.
One bank I work with seems to have all but given up on online banking and I just have to use their app because online banking will no longer work on Linux (although they don't openly admit it).
I think Android and iOS are safer platforms than PCs and that's why banks want you to use your phone.
> I think Android and iOS are safer platforms than PCs and that's why banks want you to use your phone.
This statement fills me with revulsion and rage lol. The only real "safety" involved here is the removal of user agency. I have a lot more trust in a machine I can actually control, secure, and monitor than the black box walled-garden of phoneland.
Your bank's insurer trusts Google's security more than yours, and they must surely (and rightfully) believe that while Google would spy on you, they wouldn't steal your bank account.
I've had the same thought. The only major difference that I can think of is the built-in camera making check deposits easier. It may also be that people were just generally using computers more and using the internet more over this same time period, although a lot is that because of smartphones
Yeah, I have been doing online banking since around 1998.
I have refused to install the bank app on my phone because I see no point in it and just downsides in case I get mugged (bad experience in my teenage years)
The 1 check I get a year takes about a minute to deposit at the ATM on my way to work.
Generally yes the apps tend to be easier to use for most things, especially with a high-speed internet connection. Customers prefer them, banks build them since customers prefer them.
My PC has had a scanner connected to it for over 20 years, and in the mid 00s I was scanning and depositing checks through my bank's website (USAA). Even with modern cameras and fancy smarphone software, the results you get from a PC scan are still much better than taking a picture with your phone.
If you don't have a scanner, nearly all laptops have a webcam built in, and many people have one for their desktop as well.
On top of all that, there's no reason you can't use your smartphone camera to upload an image into a website through the mobile browser. I've done it many times for things. Just this morning I "scanned" a receipt into Ramp by taking a picture with my smartphone in the mobile browser.
You can't invade the user's privacy nearly as well in a browser (which is great for analytics/marketing), so there's a lot of incentive to the app creator to force a mobile app. But I think we should be honest that it's not for the user, it's for the company.
> My PC has had a scanner connected to it for over 20 years
You're basically the only person in America doing this. Tens of millions of folks are just scanning it with the app on their phone and it's objectively a much better experience lol. The resolution of the photo taken on your smartphone is beyond good enough, there's no need to over-engineer something here.
> You can't invade the user's privacy nearly as well in a browser (which is great for analytics/marketing), so there's a lot of incentive to the app creator to force a mobile app. But I think we should be honest that it's not for the user, it's for the company.
I agree with your first sentence, but not your second one.
Banking applications can certainly get more/different data on you from using the app, but the job of the bank is to protect money and to know their customer. Privacy is secondary, of course outside of things like other people knowing your account balance, unauthorized access, &c. That's for the bank, because they don't want to lose your money, but it's also for you because you don't want other people getting access to your money.
Make that two people. I much prefer to slap the rare check on the scanner than fiddle with the phone. My banks "scan the check" part of the app was buggy for a long time, so maybe that jaded me. (~"move closer", ~"move away", ~"increase lighting"...)
> the results you get from a PC scan are still much better than taking a picture with your phone.
The quality of the check images is not as big of a deal as you might think. No one is actually inspecting these unless the amount of deposit is near a limit or the account is flagged for suspicious activity. You definitely do not want to throw away the physical copy until the bank confirms the deposit.
Yes I totally agree. Mainly I threw that in there to pre-empt any "quality" argument that someone might try to use for why native mobile app is needed.
Is it? I lived in the US for 20+ years until 2021 and, though there were definitely more checks than I see in Europe now, the frequency with which I used them was approaching zero, which definitely wouldn't qualify as "stubbornly check-focused".
Both my housekeeper and contractor use checks and, while I could get the bank to "write" them checks, it's easier to just hand them a piece of paper. I've also needed to pay my neighbor something from time to time and it's easier to just write a check. I do also periodically receive checks from various institutions.
I guess to me there's just a big difference between what you're describing (which matches what I remember) and "stubbornly check-focused" as ancestor comment said.
I do find the money transfer options where I am in Europe much easier, though, and they do make checks and PayPal/Zelle/Venmo pretty obsolete too, IMO.
I think that's fair. I do carry a few checks in my travel folder but I don't think I've ever used them in Europe. Do carry some backup US cash.
But in the US, there's probably a general expectation that you can send or receive checks at least now and then. There are often other options but that's probably the lowest friction one even if my bank can send checks if needed, albeit with some delay.
I can do all the same things with my bank with a browser that I can via the app.
It seems like a natural evolution of the technology and adoption rates to me. There was rudimentary online banking in the 2000s, then we saw banks shift to fully online presences in the 2010s. Maybe it wasnāt āthe iphoneā but just the fact that by the 2010s, everybody had a device in their pocket.
Best way to get clicks without publishing something of substance is to publish something wrong. If the article was titled "The internet killed bank teller jobs", then people would think "duh" and no one would click on it.
I used to do banking on my (touch tone) phone before I did online banking. I still do online banking on my PC because my budget spreadsheet is on my PC, right next to my browser window.
Personally, I don't think this is about banking apps. I'm kinda surprised an article talking about ATMs and teller jobs barely mentions cash, checks & cards and doesn't mention paypal or venmo at all. I used ATMs less when it became less of a necessity to carry cash.
You don't use cash to buy things online. Even in person, outside of brick & mortars, paypal/venmo became in vogue at some point in the past. Those are banking apps in their own way.
Honestly, its overkill. When my MaBook went kaput, i had to start doing everything on my iPhone. Had to get a good mobile documents office suite (Collabora is great ), do all my banking with both mobile apps or desktop browser apps, etc. Its been dfine, i doubt i would use a full size computer for that anymore.
I'm always a bit confused in these discussions what is special about banking software of any kind at all. My bank has an app, but other than checking a balance every now and again, the only reason I use it is because it's also my insurance provider and I make claims through it. For actual banking, I don't really do any, through the website or the app. My pay is direct deposit. My purchases are on credit with payment details generally stored with the vendor; otherwise, I have cards or use the numbers. Monthly balance payoff is autopay. I had to go into the website once to set all that up however many years ago I don't remember, but people talk in these threads like they're in their banking apps directly moving money around all the time, actually making payments with the app. Why?
I have a personal current account, a shared current account with my wife, and several savings accounts. It is frequently necessary to move money between these accounts.
Also, here in the UK we don't really use Venmo or anything like that, so normally transferring cash to and from friends and family happens by bank transfer as well.
Doing it on the go via the app is much easier than using the web app through the main OS browser just because the UI is optimized. not a problem with using the web app approach, just that there isnt as much investment in it due to zeitgeist i guess.
Also since you are already using 2FA, you are already on the phone so might as well do basic operations there.
I can also look at transactions in my bed before going to bed so that is nice.
If I need to look at a support ticket or look at transactions more deeply, i still use the desktop approach.
I don't think many people would argue that there shouldn't be a mobile app, just that there should also be a website/webapp way to do it as well if you don't want to install their native app.
Right, I'm going out of my way to avoid inviting Google/Apple and their respective app store surveillance ecosystems into my transactions. I don't even have banking apps installed. I don't understand why so many people are prostrating themselves to this future for minor convenience.
Mobile payments (at least in places where they are executed correctly) are certainly a huge improvement over physically exchanging cash and change. I haven't needed to take out my wallet for years.
You just need to understand how things are now. Here are few modern smartphone conventions that render banking on an old-fashioned PC totally obsolete:
- Remembering that you need to do banking, but waiting to do it until you're at home in front of your computer. This is impossible now, and if I don't follow the impulse the moment it occurs, the impulse will forever escape into the ether.
- Even the mere mention of needing to observe a URL is often far too scary. Typing one in, or using a browser bookmark is of course, impossible.
- Using a keyboard and mouse. It's just too onerous to use tools that are efficient and accurate. Modern users would much rather try to build a mental map of the curvature of their thumb, so that when they touch their touchscreen and obscure the button they're hitting, they they can reference that 3D mental map to guess at what portion of the screen they've actually pressed. Getting this wrong 30% of the time does not detract from the allure of touch screens.
- Using a normal-sized screen that allows you to actually see a lot of data at once, or even use multiple tabs. Again, this is really unthinkable. Of course it be be completely unacceptable to need to wait to do your banking until you're in front of a computer. It's 2026, and I cannot be bothered to remember to do a task later. But, in needing to always follow every impulse immediately, it doesn't matter that my phone screen only displays a small amount of information at once, or that tabbed browsing is impossible in a banking app. Those inconveniences are acceptable, or even welcome!
I'm based in the rich Western world. Whenever I travel elsewhere, I'm amazed by the cheapness of labor.
Humans would attend a gas station or fetch items in a store. Why? They're completely unneeded, I can do (and WANT to do) that myself.
I always feel sad about these people, trapped in an economic system that forces them into useless labour when they could spend their time learning actually useful skills.
It's weird how you both describe visiting other cultures AND thinking everybody's just like you in the same paragraph.
1. You can fill your own car with gas, but some people can't, or prefer someone more knowledgeable to do it for them. Some people like the comfort of having someone bag their groceries for them, or have disabilities that necessitate it. Some people are old. Today you learned.
2. Your economic system is not different than theirs. Everybody NEEDS a job to support themselves, their families and to be functioning members of society. That means jobs that can easily be automated won't be automated. Also, you may make a lot more money than that kid bagging groceries to make a few bucks for himself, but at least what he does actually helps someone. What we here on Hacker News do is mostly build imaginary products that will be gone and forgotten quicker than you can say "Al Bundy".
3. Not only that, all of us here have basically written our own replacements and made ourselves obsolete. Something tells me your job isn't really needed too.
That labor cheapness is enabled by a cheapness of cost of living. Those things all tend to feed onto each other.
> I always feel sad about these people, trapped in an economic system that forces them into useless labour when they could spend their time learning actually useful skills.
It's useful labor. Yes you could do it yourself, but it gives them a job which they can ultimately use to afford food and where they live.
I mostly only feel bad for kids doing that sort of labor as it means they aren't getting an education. But for an adult? It speaks to something a bit right about their economic situation that they can stay a float by merely fetching items in a store.
I wish in the US that it was possible for someone to make a living doing doordash or instacart.
Some countries prioritize having low unemployment numbers, because they believe that unemployment leads to unrest. Governments can choose to subsidize the cost of labor to achieve this.
Also I think it is preposterous to claim that these people are trapped.
Because the presence of a human likely prevents shoplifting and / or vandalism. It must make economic sense for the gas station owner to employ a human, and I suppose this is the sense.
What actual useful skill do you think the gas station keeper could learn? Is their employment the thing that prevents them from learning these skills?
> What actual useful skill do you think the gas station keeper could learn?
I mean, it's possible there are useful skills they could learn but there's not the interest or desire to learn those skills. It's completely possible that person is perfectly content doing that work.
It is a different mindset and they are happy with what they are doing. I come from India where there is a ton of that labor. When I lived there, I had a couple of full time house help, supplemented by cook etc as needed. They had plenty of time by themselves. They would genuinely just zone out when they had free time, even significantly long. THey liked the easiness of the job, and the fact that once it is over, it is just over. No need to think about tomorrow, take your work in your head etc. A lot of the world's people are like that, maybe even a significant majority.
I am sure in the rich Western world you also have people who work at a gas station, who fetch items from a store.
Helping someone fill their car with gas or sell them an item is useful as well, not everyone should be a software developer. Before feeling sad for other people, think about yourself as well.
If it makes you feel better, most labor is useless. In the sense that a computer program and/or machine could easily do it, or the customer could trivially do it themselves. But the labor is cheap enough that having a warm body around is worth it.
We've pretty much locked ourselves into an economic system that requires everyone to work, even though our productivity has skyrocketed many orders of magnitude. The end result is most people are doing meaningless work just because they have to in order to survive, and most jobs do not need to exist. This is true even in office work. It usually manifests as moving stuff from A to B and then maybe back to A. Basically, not creating, just moving. And not physically moving either.
There are ATMs not attached to bank branches. They could have replaced the branches with ATMs before. (I do wonder what bank tellers are doing these days. I mean actual tellers, not investment advisors and jobs like that.)
I still need to talk to a real bank teller before withdrawing $10,000 in cash. Above a certain amount my bank requires an ID in addition to a debit card and a PIN.
Had go to go a branch a couple times in the last year at a local credit union. Largely seems like tellers are getting busy work. There are not a lot of tellers present, and they appear to be doing other things on their workstation. So they get up to go to the teller window and help me out with my request, which usually involves them playing around with some archaic bank app on the teller machine and fiddling with the copier for a bit. A supervisor is always around who knows more of the business use cases and always seems to get involved either out of boredom or because they're the only ones who know how to do something.
They are handling in-person transactions, usually deposits (many who deposit checks manually still don't know how to use the app to do so, or if the branch has an ATM that does deposits).
They are the only way to get non-20 cash in many areas; the ATMs that can dispense other bills are quite rare. And if you want $100 in ones you're going inside.
They're basically bank receptionists for old people who will type details into the same system that the general public has access to. They also handle cash for small businesses (I worked in a cafe during university and we'd regularly have to do runs into town to deposit rolls of bills and get more change to float the till)
If that's all you think tellers are then you're missing out on a lot of opportunities.
They are the first line of human-to-human contact with customers. They are able to sell new services or upsell existing services to customers, especially with the customer's data right in front of them. A new pleasant conversation plus "Oh by the way, did you know that you could get service ABC that would help you?" is something that an LLM or ATM can't do reliably.
There's a tremendous amount of opportunity available with well-trained tellers.
I don't feel the phone conclusion is quite correct, because it's not just the need to use an ATM that has dropped. The need to use a banking app or website has also dropped.
The behavior of companies has changed dramatically. Checks have almost vanished, you can often set up automatic payments, and you can get bank balance notification emails/messages. A large portion of banking interactions are fully automated.
In recent years I have been going less and less to banks. 20 years ago I would go monthly to pay some bills.
Nowadays, I must visit a bank once or twice a year tops. My manager frequently sends me messages, but invariably he is trying to sell me something.
I've noticed that branches have really cut down on tellers and in my latest visit the branch didn't even have a teller, just someone helping people use the ATM and lots of desks (most were empty) for you to handle more complicated business with your account manager.
That paired with an increasingly cashless society. (Which is also in large part to smart phones) Otherwise you'd still need more tellers to conduct transactions that exceed ATM limits.
As far as I can tell, it's entirely that. The things the author cites as how mobile banking supplanted going to the bank (paying for things with debit cards, getting your paycheck direct deposited, etc) have nothing to do with mobile banking. They are all just as you said: we live in an increasingly cashless society, the only reason to go to the branch is to deposit or withdraw money, so the need for tellers has gone off a cliff.
Yes, exactly my reaction. Other than maybe to open an account in the first place, the only reason I ever went into to a bank even in the pre-internet, pre-smartphone era was to deal with cash.
Checks could be deposited in the deposit drop, or later at an ATM. My payroll went to direct deposit as soon as that was possible.
But to get cash, before ATMs, you went into the bank, unless you had check-cashing privilges somewhere else (supermarkets used to offer this). To deposit cash, you went into the bank so the teller could count it in front of you and agree on the amount. It was risker to deposit cash in a deposit drop or ATM.
The move to cashless transactions for almost everything, and the resultant rare need to carry cash, is IMO the main reason why we don't need very many bank tellers anymore.
Something that only came with the banking apps was opening of accounts via camera based identification and other security critical stuff, like 2fa for transfers, resetting card pins and setting other security features.
It's also easier to scan payments via app than go to the bank, something that is only possible via native like apps
What is a Bank nowadays. It is nothing. It is a virtual construct and software that we are supposed to put our trust into, where banks have a history of betraying that trust.
I didn't notice any link with the iPhone, except maybe a vague coincidence in timing. Online banking existed before the iPhone, it worked using websites, on personal computers. And it took some time before smartphones were taken seriously by banks.
What I noticed however is a noticeable decrease in service quality in bank branches while online (desktop browser) options became better. Banks pushed customers out of their branches progressively. In the early 2010s tellers couldn't do anything you couldn't do online by yourself. For services like dealing with large quantities of cash, or coins, they made it so that you couldn't do more than what the ATMs allowed you to do, limiting the amount of cash the branch had access to and increasing how much you could withdrew from ATMs.
They didn't get the idea to fire all their tellers when Steve Jobs announced the iPhone. It was a decision at least a decade in the making. It is just that people tend to resist change so it happens slowly, especially for big, serious business like banking. And I don't think it is a bad thing.
When ATMs first came out, they were mostly still only at the branch because they were big machines. I remember in the late 70s/early 80s, if you got a steady check (like social security or a paycheck from a steady job) you could cash them at the liquor store. The liquor store would even run my Dad a tab, and he would pay it off when he cashed the check. On paydays he would not be the only one doing that, they must have had to get a lot of cash on hand.
Eh, bank teller jobs were dying and on their way out long before the iPhone showed up. Back in the early 00s local branches were downsizing left and right. My small rural town went from having three banks with like 4 tellers in each bank, in the mid 90s, to one bank with 1-2 tellers, in the mid 00s.
By the end that bank only dealt with mortgages, other loans, and saving accounts.
Online banking and the rise of card use was a huge reason for that. It is almost 20 years since I last time went to a physical bank to withdraw or deposit money, or pay a bill. Probably even longer for paying bills.
Fun story. There are still bank tellers in the Falkland Islands because there is no e-banking. Transfers are literally made by filling in a piece of paper and taking it to the bank.
Starting with quotes with JD Vance and talking about listening to him on Joe Rogen is... a choice. Also I fail to see how the iPhone did anything or is relevant at all. Banking apps were made by third parties years after the iPhone came out and everybody had dozens of smart phones to choose from. The reason why they mentioned the iPhone specifically, touch screen and app store, already existed in the form of PDAs long before the iPhone came out.
If I have to physically still go to the bank, it really hasn't disrupted much. The iPhone created an opportunity... the banks investing around the technology is the disruption. ATM itself couldn't unlock as much which I suppose is the paradigm mentioned in the article.
I hate the graph here. "Bank teller employment has fallen off a cliff" - well it _looks_ that way but actually it's more like halved from its peak because the bottom of the Y axis isn't zero. That's still a significant reduction, but it's not as dramatic as it seems at first glance.
I was born in the mid-80s and I've never had a bank teller experience. For me growing up, the bank teller was simply the tech support person for my debit card.
This writing style where every section has multiple paragraphs of preamble, prolepsis, cold openers for cold openers, and tangents is infuriating. Get on to the point already.
I guess the trope in movies of masked bank robbers going in and threatening a scared bank teller will be a thing of the past soon. Pointing a gun at an iPhone doesn't have the same vibe.
Blog says: ATM didn't kill jobs. Okay, it did kill some jobs. Proportionally did, but lots of new banks means overall more jobs. (The relationship management stuff is kind of irrelevant, it was simply the banks took the efficiencies to expand, thus still less tellers per branch, but more tellers overall.) /Completely different technology that didn't have the physical space limitations of ATMs/ then caused branches to decline and then the actual teller decline was felt.
Pretty funny how this is being twisted into what feels like AI booster shillery. Smart people are talking about AI as being similar to ATMs (I prefer the analogy of a spelling and grammar checker in a word processor) or other marginal increasers in human productivity/efficiency. They absolutely will increase productivity. They mean less people can do more. But the the roles don't go away completely because they have clear technological limitations. They spout probably likely text, and straight up lie, and you can't trust 'em. That's a limitation in what they are just like an ATM needs to be in a big metal box and they only dispense cash.
AI can't do the automated firm linked to (to be fair, didn't read that linked substack, as it looked as ridiculous as that other sci-fi fanfic by Citroni Research or whatever it was). Not AI as it is now known, namely an LLM chatbot. /A completely different technology/ might. A technology that might be informed by AI. Sure. Just like I'm sure mobile banking was informed by the technology in ATMs. But we're not calling smartphones with mobile banking apps "mobile ATMs". Because if we were, then you could get away with it. And the future technology that could remove "labor shaped holes" (or however the author phrased it) could be twisted into an AI nomenclature. Just like Machine Learning (ML) got twisted into AI nomenclature. But the iPhone probably didn't need the ATM to come first. It needed things the ATM uses. The next thing could very well use ML. But not enough to be called "AI" except to boosters shills.
Overall, this sounds like the usual AI boosterism that Ed Zitron complains about often. And I agree with his critiques. This article says nothing about how a /new/ technology needs to come about from AI. If it did, it would also have to comment on whether we need to spend insane amounts on data centers and circular deals to get to it. Because my guess is the answer is, no, it takes R&D and a truthful "we don't know what it looks like yet and we can't promise you shareholders when it will come" to get to it.
Ironically the author says the ATM story was used to come up with two incorrect interpretations, and then provides what I feel like was another. Still interesting, if possibly irresponsible in how it frames AI as iPhone--and not the ATM it still feels like. [EDIT: a word.]
I really enjoyed this article, I didn't bridge the idea of an ATM and mobile banking.
I think the idea raised about "Automated Firms" is a bit off in the picture painted in that linked article. I think the David Oks intention is to paint a picture of a fully automated company, but the linked article gives this impression:
> Future AI firms wonāt be constrained by what's scarce or abundant in human skill distributions ā they can optimize for whatever abilities are most valuable. Want Jeff Dean-level engineering talent? Cool: once youāve got one, the marginal copy costs pennies. Need a thousand world-class researchers? Just spin them up. The limiting factor isn't finding or training rare talent ā it's just compute.
In that above paragraph the author is saying to the reader that a human will be able to spin up and get these armies of intelligent workers, but at the end of the day their output is given to a human who presumably needs to take ownership of the result. Intelligent workers make bad choices or bad bets, but those AI machines cannot "own" an outcome. The responsibility must fall on a person.
To this end, I think the fully autonomous firm is kind of a fallacy. There needs to be someone who can be sued if anything goes wrong. You're not suing the AI.
That is why a fully automated firm would be a paradigm shift. Instead of requiring someone to be responsible and to QA things, you just let AI systems be responsible internally, and the company responsible as a whole for legal concerns.
This idea of an automated firm relies on the premise that AI will become more capable and reliable than people.
In this regard, the company cannot be created where there is not a single person tied to it, at least legally, even shell corporations have a person on the record as being responsible. So there needs to be some human that is apart of it, and in any "normal" organization if there is a person tied to the outcome of the company they presumably care about it and if the AI 99.99% of the time does good work, but still can make mistakes, a person will still be checking off on all its work. Which leads to a system of people reviewing and signing off on work, not exactly a fully autonomous firm.
Also, employing āinfinite intelligenceā by splitting it into āworkersā and organizing them into firms cannot be farther than a paradigm change.
Itās strictly an attempt to shoehorn the new tech into an existing paradigm, just because right now the system prompt makes an āagentā behave differently than the one with a different prompt.
Yeah, I think if there is some sort of super intelligence, the idea would be that it would make the system of computers and computation irrelevant entirely. Now that would be novel.
There is no clear link to the iPhone causing lower teller employment.
This article does have a glaring omission: The 2008 financial crisis effects on the banking industry in general. When there are fewer local banks there are naturally fewer tellers employed. Bank failures peaked in 2010 in the aftershocks of the crises, which lines up nicely with the articles timeline.
yeah weird. Same goes for the "ATMs increased demand for tellers" strange idea suggested earlier in the article, which was automatically disproven right there by actually attributing the growth in tellers to deregulation. Which one is it?
This seems like a fluff piece. The tl;dr is that mobile banking (not the "iPhone") is what "killed" bank teller jobs. You can add online banking, credit cards, debit cards, and all other cashless payment options to that too.
There is also a premium for the human touch. I currently pay $15 fee to my bank a month. Going rate here for a bank account is $0.
But the $15 bank has a call center that is dreamy - reliably connected to a competent focused individual in under 3 seconds.
It doesn't matter how good the tech & automation is I place an economic value on that ability to pick up the phone and talk to a human. LLMs are crushing it but I'm not fuckin paying $15 for an LLM.
I didn't see the article mentioning how banks forced people to use ATMs or apps instead of tellers by having "green" accounts. where you would get a monthly account fee waved if you didn't go in to a branch.
Right around when my local credit union began requiring (IMHO insecure) 2FA, I coincidentally moved right next door to a branch location.
Since I refuse to implement their "security" "feature," I just walk into their office every time I need a simple balance inquiry/transfer. They probably hate that I have just enough money deposited to consider my inconveniencing them profitable.
Everyone I knew working as a bank teller quit because the actual job is screwing over old people with bad performing and long lasting investments.
My bank calls me at least once a year to tell me my personal bank teller changed again.
the line is being blurred as the need for tellers goes down many banks have the tellers performing personal banking adjacent tasks, like selling products, accounts or other upsells to existing customers
Based on the fact that we've had ATMs since the 1970s and bank tellers didn't fall away until the 2000s, the correlation isn't there regardless of the causation.
The interesting takeaway is that automation rarely removes jobs inside the existing paradigm.
ATMs automated a task inside branch banking, so banks just reorganised labour around it.
Smartphones removed the need for the branch entirely.
I mean, there is definitely a turndown period in labour force when a new tech is introduced, but it will defintely produce more jobs tho, as an evolution of human history. <3
Uhhh... if it's 'mobile banking' that killed teller jobs, what does the iPhone have to do with anything other than clickbait? (I guess I answered my own question)
This must be an amerilard phenomenon. Thereās no way the number of bank tellers has remained constant in the western world. I havenāt been to a bank branch in 10 years.
The graph showing that "Bank teller employment has fallen off a cliff" is not zero based. This is pretty damn bad. The graph looks like it's going down 90%, but it's actually going from 350k to 150k. That's a ~60% drop which is a lot, but not "falling off a cliff".
Probably a bigger sign to look for would be average age of bank tellers vs other occupations. If it's trending higher, then it's likely just people who've been doing the job for a long time and serving other older customers. I have a feeling not many young people are becoming tellers or even needing their services, but I can't verify it.
> an AI system is literally a machine that can think and do things itself
why do so many writers claim this as a matter of fact? are we losing (or did we never have) a shared definition of the word "think"? can an LLM, at this time, function with zero human input whatsoever?
edit to add: these are genuine questions, not meant to be rhetorical :)
it's hard for me to gauge a broader understanding of AI/LLMs since most of the conversations i experience around them are here, or in negative contexts with people i know. and i'll admit i'm one of those negative people, but my general aversion to AI mostly has to do with my own anxiety around my mental health and cognitive ability in a use-it-or-lose-it sense, along with a disdain for its use in traditionally-creative fields.
>are we losing (or did we never have) a shared definition of the word "think"
People have been saying, āthe computer is thinking,ā while webpages are loading or software is running for as long as Iāve been consciously aware. I agree thereās something new about describing AI as, āliterally a machine that can think,ā but language has always had fuzzy borders
It's wild to watch documentaries from the 1980s where a primitive computer is said to be "a thinking machine" that is "taking most of the work out of a job".
yeah, for sure. i really think some people are under the impression that LLMs are a form of general AI that actually processes thought rather than being an admittedly-impressive exponential autocomplete.
though i'm not by any means an AI booster, my question wasn't really meant to be taken as a gotcha - more a general taking stock of where we're at in terms of broader understanding of these technologies outside of the professional AI/hobbyist world.
One key line about ATMs is buried deep in the article:
> the number of tellers per branch fell by more than a third between 1988 and 2004, but the number of urban bank branches (also encouraged by a wave of bank deregulation allowing more branches) rose by more than 40 percent
So, ATMs did impact bank teller jobs by a significant amount. A third of them were made redundant. It's just that the decrease at individual bank branches was offset by the increase in the total number of branches, because of deregulation and a booming economy and whatever else.
A lot of AI predictions are based on the same premise. That AI will impact the economy in certain sectors, but the productivity gains will create new jobs and grow the size of the pie and we will all benefit.
But will it?
> But will it?
My prediction is no, because productivity gains must benefit the lower classes to see a multiplier in the economy.
For example, ATMs being automated did cause a negative drop in teller jobs, but fast money any time does increase the velocity of money in the economy. It decreases savings rate and encourages spending among the class of people whose money imparts the highest multiplier.
AI does not. All the spending on AI goes to a very small minority, who have a high savings rate. Junior employees that would have productively joined the labor force at good wages, must now compete to join the labor force at lower wages, depressing their purchasing power and reducing the flow of money.
Look at all the most used things for AI: cutting out menial decisions such as customer service. There are no "productivity" gains for the economy here. Each person in the US hired to do that job would spend their entire paycheck. Now instead, that money goes to a mega-corp and the savings is passed on to execs. The price of the service provided is not dropping (yet). Thus, no technology savings is occurring, either.
In my mind, the outcomes are:
* Lower quality services
* Higher savings rate
* K-shaped economy catering to the high earners
* Sticky prices
* Concentration of compute in AI companies
* Increased price of compute prevents new entrants from utilizing AI without paying rent-seekers, the AI companies
* Cycle continues all previous steps
We may reach a point where the only ones able to afford compute are AI companies and those that can pay AI companies. Where is the innovation then? It is a unique failure outcome I have yet to see anyone talk about, even though the supply and demand issues are present right now.
> My prediction is no, because productivity gains must benefit the lower classes to see a multiplier in the economy.
Baumol's cost disease hurts the lower classes by restricting their access to services like health care and education, and LLMs/agents make it possible to increase productivity in these areas in ways which were once unimaginable. The problem with services is that they're typically resistant to productivity growth, and that's finally changing.
If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam.
"Baumol's cost disease hurts the lower classes by restricting their access to services like health care and education, and LLMs/agents make it possible to increase productivity in these areas in ways which were once unimaginable."
You've expressed very clearly what LLMs would have to do in order to be economically transformative.
"If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam."
It's not that process innovations are lacking, it's that product innovations are perceived as an indignity by most people. Why should one child get an LLM teacher or doctor while others get individualized attention by a skilled human being?
> Why should one child get an LLM teacher or doctor while others get individualized attention by a skilled human being?
Is the value in the outcome of receiving medical advice and care, and becoming educated, or is the value just in the co-opting of another human being's attention?
If the value is in the outcome, the means to achieving that aren't of much consequence.
More subtly, what is an education? What is care? As you point out, the LLMs are (or probably will become) perfectly good at the measurable parts of those services; but I think the residual edge of āgoodā education/care is more than just the other humanās co-opted attention.
How many of us have a reminiscence that starts ālooking back, the most life-changing part of my primary or secondary education was ________,ā where the blank is a person, not a curriculum module? How many doctors operate, at least in part, on hunchesāon totalities of perception-filtered-through-experience that they canāt fully put into words?
Iām reminded of the recent account of homebound elderly Japanese people relying on the Yakult delivery lady partly for tiny yoghurt drinks, but mainly for a glimmer of human contact [0]. Although I guess that cuts to your point: the value in that example really is just co-opting another humanās attention.
In most of these caring professions, some of the value is in the measurable outcome (bacterial infection? Antibiotic!), but different means really do create different collections of value that donāt fully overlap (fine, Iāll actually lay off the wine because the doctor put the fear of the lord in me).
I guess the optimistic case is, with the rote mechanical aspects automated away, maybe humans have more time to give each other the residual human elementā¦
[0] https://news.ycombinator.com/item?id=47287344
The supply/demand picture here is more complicated than it looks.
If AI displaces human educators, yes, their supply shrinks -- but we can't assume what direction its demand will go.
We've seen this pattern before: as recorded music became free, live performance got more expensive, and therefore much less accessible than it used to be.
What's likely to happen is that "worse" (read: AI) education will become much cheaper, while "better" (read: in-person) education that involves human connection-driven benefits will become much less accessible compared to what it is today.
Most people may be consider it a win. It's certainly not a world I'm looking forward to.
Important follow-up to my comment: as fewer people do X -- live music, medicine, education, you name it -- fewer talented people do it as well.
Fields need a large base of participants to produce great ones. This is exactly why software has been so extraordinary over the past 30 years: an unusual concentration of gifted minds across the entire humankind committed themselves to it.
In my view, Bach, Rachmaninoff, Cole Porter equivalents today probably aren't writing symphonies. They've decided to write code for a living. Which is why any Great American Songbook made today won't hold a candle next to one from 1950s.
The premise of your argument is that "the outcome" can be separated from the process. This is true enough for manufacturing bricks: I don't much care what processes was used to create a brick if it has certain a compressive strength, mass, etc.
But Baumol's argument, which you introduced to the conversation, is that outcome and process cannot actually be distinguished, even if a distinction in thought is possible among economic theorists.
> But Baumol's argument, which you introduced to the conversation, is that outcome and process cannot actually be distinguished
How is that Baumol's argument? How is 'outcome' vs 'process' relevant to his argument at all?
'Cost disease' is just the foundational truth that the cost of the output from industries with stagnant productivity will increase due to the fact that the workers in that industry can be more valuable in other industries, reducing the number of relative workers in the stagnant industry.
If you want to make the output from a stagnant industry available to a broader spectrum of the population then you have to improve the productivity of that industry.
It's very true for healthcare (especially mental healthcare) and education today as well, because for most people, the choice isn't LLM vs. human attention - it's LLM vs. no access at all.
Even if you have perfect medical information and advice through an LLM, can you perform surgery on yourself? Can you prescribe yourself whatever medication you think you need?
For education, if you know as much as the average Harvard grad, can you give yourself a Harvard degree that will be as readily accepted in a job application or raising funds for a new business?
Interesting perspective; medical regulation as a business moat
> the value just in the co-opting of another human being's attention?
Thats a weird way of describing it.
A machine telling me to exercise and eat right will be ignored, even if the advice is correct. A person I trust taking me aside, looking me in the eye and asking me the same would be taken far more seriously.
That may well be true if you need to be persuaded to exercise and eat right.
OTOH, if you don't need to be persuaded and just want information on how best to go about doing it, then I think it makes little difference where the information comes from as long as it's of reasonable quality.
It also seems like the value of quality tutoring that doesn't primarily function as social/class signaling goes down as tools capable of automating high quality intellectual work are more widely available.
It depends on outcome again: is the value of tutoring the social class elevation, or is it in the outcome of becoming more skilled and knowledgable?
There's also the deeper philosophical question of what is the meaning of life, and if there's inherent value in learning outside of what remunerative advantages you reap from it.
If I described my symptoms to an AI and it suggested a diagnosis, I would defintely get a second opinion.
Well, there's always wars as the way to get rid of people. I really don't rule out that the people that benefit from this sort of thing will purposefully steer the world in that direction because the poor won't have any choice other than to enlist as a way out of their situation, and never mind the consequences. You can already see some of this happening.
You're implying that insurance companies will allow prices to fall and lower their profits. That seems like a really unlikely event in the current economy. They fire a lot of doctors and nurses, but they won't lower prices.
This is assuming no competition materializes from the lowered friction
The ACA requires 80-85% of health insurance to go toward medical care (medical loss ratio). The way they work around that is to figure out how to charge more for medical care.
Can a robot write a medicine prescription? A medical procedure prescription? If yes, that would be a game-changer. But the medical insurance providers would be very cautious about honoring these. Then, if things go wrong, what entity would be held accountable for malpractice?
You already can get a good-quality medical advice "for nothing", unless it requires e.g. a blood test. The question is, how actionable such an advice is going to be, and how even the quality is going to be.
By the time it replaces doctors, nobody but today's investors will be able to afford anything at all. The X-shaped economy would have owners in the V and manual laborers (assuming this doesn't translate to gains in automation) in the ^. This outcome is worth avoiding...
Iām sick of this idea that āfreeā services are beneficial to society. There is no such thing as a free lunch; users are essentially bartering their time, attention, IP (contributed content) and personal/behavioral data in exchange for access to the service.
By selling those services at a cost of āfreeā, hyperscalers eliminate competition by forcing market entrants to compete against a unit price of 0. They have to have a secondary business to subsidize the losses from servicing the āfreeā users, which of course is usually targeted advertising to capitalize on the resources paid by users for access. Or simply selling to data brokers.
With the importance of training data and network effects, āfreeā services even further concentrate market power. Everyone talks about how AI is going to take away jobs, but no one wants to confront how badly the anticompetitive practices in big tech are hurting the economy. Less competition means less opportunity for everyone else, regardless of consumer benefit.
The only way it works if the āfreeā service for tutoring or healthcare is through government subsidies or an actual non-profit. Otherwise itās just going to concentrate market power with the megacorps.
This 1000x. "Free" is only a viable business model if the govt funds it. Otherwise, the $$ has to come from somewhere else in the company - how long will it take for the company to lose interest in a loss-leader when they're making $$ from other parts?
Look at all the deprecated Google products. What happens when Gemini-SaaS makes billions from licensing to other companies, and Gemini-Charity-for-the-poors starts losing money?
Sadly, the bigger the $$ in the tech pie, the more we have attracted robber barons, etc.
> Iām sick of this idea that āfreeā services are beneficial to society. There is no such thing as a free lunch; users are essentially bartering their time, attention, IP (contributed content) and personal/behavioral data in exchange for access to the service.
In aggregate, this is true, but there are many ways to game the system to one's advantage and get a true "free lunch." For example, people watching Youtube with an adblocker and logged out don't provide Google with any income or useful telemetry. Likewise you can get practically unlimited GPT/Claude/etc by using multiple accounts.
No, you are misunderstanding th economic principle. There is still a cost associated with serving that user, and the user is still paying for the cost of their internet connection and the opportunity cost of spending time on the service, or of setting up new accounts to get past usage limits. āNo useful telemetryā I donāt really agree with in the YouTube example, as view counts are still vital for their recommendation algorithm.
TINSTAFL has two main implications. First that nothing is free, someone has to pay for it. Second is that money is not the only thing you pay with; every choice has an opportunity cost. Gaming the system costs someone something.
Your argument is (mildly) a variant of the broken window fallacy.
AI will bring about a de-sequestering of talent and resources from some sectors of the economy. It's very difficult to predict where these people and resources will go after that, and what effect that will have upon the world.
> because productivity gains must benefit the lower classes to see a multiplier in the economy
by this logic, the invention of mechanized farm equipment, which displaced farm labor, didnt increase productivity
On the contrary, humanity spent nearly its entire existence calorically deficit, and until mechanized farming did we finally see health outcomes improve, height increase, IQ increase, and populations explode.
Productivity gains in the case of mechanized labor got everyone out of subsistence farming and into factories.
AI gets everyone out of every job and into nothing.
It made food cheaper.
The benefits largely accrued to the poorest people.
More likely, we will never know
https://en.wikipedia.org/wiki/Productivity_paradox
> It is a unique failure outcome I have yet to see anyone talk about
It seems likely to me that we will reach a violent, bloody revolt before we possibly reach this point. That may be why no one is taking about this failure mode
> We may reach a point where the only ones able to afford compute are AI companies
Nah. I think "good enough AI for 95% of people" will be able to run locally within 3-5 years on consumer-accessible devices. There will be concentration of the best compute in AI companies for training, but inference will always become cheaper over time. Decommissioned training chips will also become inference chips, adding even more compute capacity to inference.
This is like computing once again. In 1990 only the upper class could afford computers, as of 2000 only the upper class owned mobile phones, as of now more or less everyone and their kid has these things.
1990? We were solid lower-middle class, and I got a computer for Christmas in 1983. I bought my own, from $$ saved by working in 1987.
We were solid middle-middle class and didn't have a computer until 1989, and it was a "free", 2- or 3-year-old computer from my dad's work that they were going to throw away. We absolutely could not have afforded a computer during the 80s.
Even in the 90s, we kept relying on cast-offs from my dad's employer, and when I was preparing to go to college in '99, my parents scrounged to buy me the parts for a computer to build and take to college. But even then, my dad bought the parts at a discount through a former co-worker's consulting company, and vetoed a couple of my more expensive component choices.
And now that I think about it, my first laptop in 2003 was my dad's old work laptop that had been decommissioned.
Computers were roughly ~ $1000 in 1990. How did your lower-middle class family justify a $1000 expenditure inflation adjusted to $2565 today? Average minimum wage in the US is $11.30 so that's 29 days working at minimum wage.
My family was on the border of upper-lower and lower-middle and we bought a computer once and used it for 10+ years. I dumpster dove later to scavenge parts for upgrading until the mid 2000s when cheap computers became available.
Yes and also keep in mind that low-income in US is high income in most of the world!
> How did your lower-middle class family justify a $1000 expenditure
What, like a yearly vacation? Maybe they stayed home for Christmas one year instead of flying to visit family
Flying? We were solid middle class in the 80s and my first plane flight wasn't until 2001 (and then only because I was away at college and my mother had died suddenly). My parents hadn't flown since the 70s (before my sister and I were born), and even then, that was a rare thing for them.
Our childhood vacations were single-day (so we didn't have to pay for a hotel) road trips to a nearby state to go to an amusement park, or multi-day trips (also within driving distance) where my dad had to go somewhere for work and the hotel was paid for by his employer. It was a huge huge deal for us when, in the late 90s, we drove down to Disney World (a 13-hour drive) for a several-day trip.
And we never traveled around Christmas; that was one of the most expensive times of the year to travel!
Not sure when or where you grew up, but most middle-class folks in the US in the 80s didn't have a lot of discretionary income, and flights were (inflation adjusted) quite a bit more expensive than they are today.
I suspect your family was not as middle class as you think it was. You're describing a very similar childhood to what I had in the late 80s, but we were lower class for sure
I'm not saying that middle class families flew all the time in the 80s, but they absolutely could afford to if they wanted to make it a priority
A cursory google search seems to bear this out. Cheap flights in north america started in 1978 with some air travel deregulation.
I would argue we've even already seen this play out with productivity gains across the economy over the last 40 years. The American middle class has been gradually declining since the '80s. AI seems likely to accelerate that trend for the exact reasons you point out.
A lot of people recognize this pattern even if they can't articulate it, and that's why they hate AI so much. To them, it doesn't matter if AI lives up to the hype or not. Either it does and we're staring down a future of 20%+ unemployment, or it doesn't and the economy crashes because we put all our eggs in this basket.
No matter what happens, the middle class is likely fucked, and anyone pushing AI as "the future" will be despised for it whether or not they're right.
Personally, I think the solution here might be to artificially constrain the supply of productivity. If AI makes the average middle-class worker twice as productive, then maybe we should cut the number of work hours expected from them in a given week.
The complete unwillingness of people in power to even acknowledge this problem is disheartening, and is highly reminiscent of the rampant corruption and wealth inequality of the Gilded Age.
Technological progress that hurts more people than it helps isn't progress, it's class warfare.
Technological progress that hurts more people than it helps isn't progress, it's class warfare.
We've never seen such a thing before, so I don't know how you can draw such sweeping conclusions about it.
The longer we ignore the collapse of the middle class, the angrier the bottom half of the economy will get and the more justified they will feel in enacting retribution. We absolutely have historical precedents for what happens here: The French Revolution, the Gilded Age, etc. People will only tolerate a declining standard of living for so long.
Well, I see I've thoroughly angered the billionaire wannabes. Funny how they never offer any solutions to these problems and just make a stink about them being acknowledged in the first place.
> Technological progress that hurts more people than it helps isn't progress, it's class warfare.
I think this is right. The historical analogue I keep drifting toward is Enclosure. LLM tech is like Enclosure for knowledge work. A small class of capital-holding winners will benefit. Everyone else will mostly get more desperate and dependent on those few winners for the means of subsistence. Productivity may eventually rise, but almost nobody alive today will benefit from it since either our livelihood will be decimated (knowledge workers, for now) or we will be forced into AI slop hell-world where our children are taught by right-wing robo-propagandists, we are surveilled to within an inch of our lives, and our doctor is replaced by an iPad (everyone who isn't fabulously wealthy). Maybe we can eek out a living being the meat arms of the World Mind, or maybe we'll turned into hamburger by robotic concentration camp guards.
I like how you identified the pattern of defeat and still complied in advance.
Right there with you. Sure, I have gained a lot as a software engineer in the valley (I guess I'm upper-middle class now), but I'd give it up and go right back to lower-middle class (1980s) status I was raised in if it meant my kids could also aspire to a similar lower-middle class life.
This suicide-pact of "either AI goes crazy and 100 people rule the world with 99% of the world's wealth" or "AI fails badly and everyone's standard of living drops 3 levels, except for the 100 people that rule the world with 99% of the world's wealth" is not what I signed up for. Nor is it in any way sustainable or wise.
Too much class distinction / wealth between lower/upper classes, and a surplus of unemployed lower-class men is how many revolts/revolutions/wars have started.
The key distinction the ATM story reveals isn't really about job counts ā it's about what economists call composition vs. level effects.
ATMs didn't just reduce teller headcount per branch. They changed what tellers do. Before ATMs, tellers were mostly cash handlers. After, the remaining tellers shifted toward relationship banking ā account openings, loan discussions, financial advice. The job title survived but the job content was transformed.
The deeper question for AI is whether the same pattern holds when the technology affects cognitive tasks rather than physical ones. ATMs automated a narrow physical routine (dispensing cash), which freed up the human role to emphasize the parts machines couldn't do (relationship judgment, complex problem-solving). AI is different because it targets exactly those higher-order cognitive tasks that humans were "freed up" to do after previous automation waves.
So the real question isn't "will AI create new jobs?" ā it probably will. The question is whether the new tasks humans get pushed into will be higher-value (as happened with ATMs making tellers into advisors) or lower-value (humans relegated to tasks AI can't yet do, which tend to be physical, uncomfortable, or poorly paid).
The ATM precedent is optimistic, but the mechanism that made it work ā automating the simple task so humans could do the complex one ā runs in the wrong direction when the technology specifically targets complex cognitive work.
> The ATM precedent is optimistic
Is it? Maybe with survivor bias but what about all the laid off tellers? Did their situation improve? Walmart grew a lot over this time period, maybe most of them had to downgrade and be cashiers for a generally bad employer.
Also, and this might be a different analysis and topic, but tellers in the 80s had a pretty good job. It was often a decent wage with a pension and good benefits. Maybe on par with a teacher or government employee - granted not the highest pay but good, was considered a āprofessionā. Compare that to how itās changed, itās a low hourly rate on par or only slightly above retail and fast food work, heavy part-time status so as to avoid paying benefits.
I wouldnāt say that was a great example and is likely to be what may happen elsewhere once the routine work is sufficiently devalued.
Itās not just the economy, the US population increased 20% over that period while the number of tellers dropped by around 16%.
Net result ATMās likely cost ~30-40% or of bank teller jobs.
Population is really important to adjust for in employment statistics. Compare farmers in the USA in 2025 vs 1800, and yes the absolute number is up but the percentage is way down.
IIRC, the way this worked was that by decreasing tellers required per branch, it made a lot more marginal locations pencil out for branches, at a time when the banking industry was expansionary.
This is not so helpful if AI is boosting productivity while a sector is slowing down, because companies will cut in an overabundant market where deflationary pressure exists.
> So, ATMs did impact bank teller jobs by a significant amount.
Did it? This sounds like describing a company opening a new campus as laying off a third of their employees, partly offset by most of them still having the same job in the same company but at a new desk.
> A third of them were made redundant
If I'm reading this correctly, the interpretation should be that a third of them were transferred to new branches.
0.66 (two thirds retention) * 1.4 (40% more branches) = 0.84, so we only expect ~16% were made redundant.
We're already seeing large software companies figure out that they don't need 5,000 developers. They probably only need 1,000 or maybe even fewer.
However, the number of software companies being started is booming which should result in net neutral or net positive in software developer employment.
Today: 100 software companies employ 1,000 developers each[0]
Tomorrow: 10,000 software companies employ 10 developers each[1]
The net is the same.
[0]https://x.com/jack/status/2027129697092731343
[1]https://www.linkedin.com/news/story/entrepreneurial-spirit-s...
Don't count all those chickens before they hatch. There might be more started but do they all survive? Think back to the dot-com boom/crash for an example of where that initial gold rush didn't just magically ramp forever. There were fits and starts as the usefulness of the technology was figured out.
Why will we need 1000 companies tomorrow to do the same thing that 100 companies are doing today? If they are really so efficient because of AI then won't 10 companies be able to solve the same problems?
Because that car repair company with 3 local stores previously couldn't justify building custom software to make their business more efficient and aligned with what they need. The cost was too high. Now they might be able to.
Plenty of businesses need very custom software but couldn't realistically build it before.
I see no way that company would save more money from hiring an experienced developer compared to paying their yearly invoice on the COTS product doing the same thing today. The only way this works is with a very wage suppressing effect.
Off the shelf software could still cost thousands per year and I'm sure they don't do everything the shops need them to do.
Car repair companies wonāt see a meaningful improvement to their bottom line with more custom software. Will it increase the number of cars per employee per day they can repair?
I do bespoke work like this, but mostly to replace software thatās starting to cost mid 5 figure amounts per year for a SaaS setup and the support phone line has been replaced by an LLM chat bot.
What makes you think they'll be doing the same thing?
Thereās always more problems to be solved. Some of them just werenāt financially feasible before.
This is one of the key "inefficiencies" of the private sector - there might be one winner at the end of the day providing the product that fills the market niche, but there was always multiple competitors giving it a go in the mean time.
A recent example, Mitchell Hashimoto was pointing out that he wasn't "first to market" with his product(s), he was (at least) SEVENTH
Almost tautologically it's not "inefficient" to do so, because free market economics has decided that all the attempts are mathematically worth it, for a high-margin low-marginal-cost product like software.
I'm a little lost as to why seven teams duplicating effort is more "efficient" in any sense of the word than one or two teams working iteratively toward the same goal.
If this were seven government funded teams solving the same problem, people would lose their minds over the 'waste' But when private companies do it, we call it efficient market competition. The duplication is the same - we just frame it differently.
Edit: fixed some typos caused by fat fingers on a phone keyboard
The benefit from having a 5% better product that hundreds of millions of people will use is worth the duplicated effort in the beginning. The numbers just make sense.
>If this were seven government funded teams solving the same problem
The problem here is "government funded" - the trials are not rationalized by free-market economics. That is, a 5% better product in the end would not be worth seven competing developments initially.
Do the booming companies pay the same as the ones who did layoffs? If you're laid off from Meta or other top tier paying company (the behemoths doing layoffs) you might have a tough time matching your compensation.
But do they need to? If a <role X> job at a top tier company making $600k is eliminated and two <role X> jobs at a "more average" company making $300k replace it; is that really a bad thing? Clearly, there's some details being glossed over, but "one job paying more than a person really needs" being replaced by "two jobs, each paying more than a person really needs" might just be good for society as a whole.
It doesn't seem too bad when you cherry pick an outlier example, but what about when the person making $100k now makes $50k?
I'm sure the retort of the AI optimist will be that AI will make the things that person buys cheaper, and there may be truth to that when it comes to things that people buy with disposable income...
But how likely is AI to make actual essentials like housing and food cheaper?
There's likely going to be a separation between the top earners and the average.
IE. If a top tier dev make $1m today, they'll make $5m in the future. If the average makes $100k today, they'll maybe make $60k.
AI likely enables the best of the best to be much more productive while your average dev will see more productivity but less overall.
I think this is assuming that the labor market knows how to identify the dirct value of devs. This already seems to be a problem across the board regardless of job role.
I think solo founders or small software companies where top tier devs can have huge ownership will be making top dollar.
I think this is true in the short/medium term, hence the confusing picture of layoffs but growing number of tech roles overall. The limit maybe be just millions of companies with one tech person and a team of agents doing their bidding.
Maybe software engineers will be like your personal lawyer, or plumber. Every business will have a software engineer on dial, whether it's a small grocery store or a kindergarten.
Previously, software devs were just way too expensive for small businesses to employ. You can't do much with just 1 dev in the past anyway. No point in hiring one. Better go with an agency or use off the shelf software that probably doesn't fill all your needs.
And the differentiator will be (even more than it is now) product vision since AI-enhanced engineering abilities will be more level.
Only because VC companies are throwing money at them. How many of them are actually profitable and long term sustainable
Ah, so that explains why job growth is at a steady pace and the software industry hasnāt been experiencing net negative job growth the past year or so.
How silly of me to rely on reality when itās so obvious that AI is benefiting us all.
I think you're being sarcastic? I'm not sure.
Anyways, this is the start. Companies are adjusting. You hear a lot about layoffs but unemployments. But we're in a high interest environment with disruptions left and right. Companies are trying to figure out what their strategy is going forward.
I don't expect to see a boom in software developer hiring. I think it'll just be flat or small growth.
I was being sarcastic.
We are in negative growth, and the current leadership class keeps talking about all the people they can get rid of.
Look at the Atlassian layoff notice yesterday for example where they lied to our faces by saying they were laying off people to invest more in AI but they totally arenāt replacing people with AI.
> We're already seeing large software companies figure out that they don't need 5,000 developers. They probably only need 1,000 or maybe even fewer.
Long-term, they will need none. I believe that software will be made obsolete by AI.
Why use AI to build software for automating specific tasks, when you can just have the AI automate those tasks directly?
Why have AI build a Microsoft Excel clone, when you can just wave your receipts at the AI and say "manage my expenses"?
Enjoy your "AI-boosted productivity" while it lasts.
> Long-term, they will need none. I believe that software will be made obsolete by AI.
I think this is a bit hyperbolic. Someone still needs to review and test the code, and if the code is for embedded systems I find it unlikely.
For SaaS platforms youāll see a dramatic reduction, maybe like 80% but itāll still have a handful of devs.
Factories didnāt completely eliminate assembly line workers, you just need a far fewer number to make sure the cogs turn the way it should.
> Someone still needs to review and test the code, and if the code is for embedded systems I find it unlikely.
I feel like you didn't understand my comment. I am predicting that there is no code to review. You simply ask the AI to do stuff and it does it.
Today, for example, you can ask ChatGPT to play chess with you, and it will. You don't need a "chess program," all the rules are built in to the LLM.
Same goes for SaaS. You don't need HR software; you just need an LLM that remembers who is working for the company. Like what a "secretary" used to be.
LLM technology will never achieve 100% accuracy in its output. There is an inherent non-determinism. Tasks that require 100% accuracy cannot be handled by LLMs alone. If an LLM is used to replace HR, it will inevitably do something wrong, and a human will need to be in the loop to correct it.
Same goes for chess, there will always be a chance that it makes an illegal move. Same goes for code, there will always be a chance that it produces the wrong code.
Maybe a new AI technology will be developed that doesn't have the innate non-determinism, but we don't have that now.
Because AI agents are tool users. Why does AI need to research 2026 tax code changes and then try to one-shot your taxes when it can just use Turbotax to do it for you? Turbotax has the latest 2026 tax changes coded into the app. I'd feel much more confident if AI uses Turbotax to do my taxes than to try to one-shot it.
> I feel like you didn't understand my comment. I am predicting that there is no code to review. You simply ask the AI to do stuff and it does it.
I didnāt, and thanks for clarifying for me.
This doesnāt pass the sniff test for me though - someone needs to train the models, which requires code. If AI can do everything for you, then whatās the differentiator as a business? Everything can be in chatGPT but thatās not the only business in existence. If something goes wrong, who is gonna debug it? Instead of API requests you would debug prompt requests maybe.
We already hate talking to a robot for waiting on calls, automated support agents, etc. I donāt think a paying customer would accept that - they want a direct line to a person.
I can buy the argument that the backend will be entirely AI and you wonāt need to be managing instances of servers and databases but the front end will absolutely need to be coded. That will need some software engineering - we might get a role that is a weird blend of product + design + coding but that transformation is already happening.
Honestly the biggest change I see is that the chat interface will be on equal footing with the browser. You might have some app that can connect to a bunch of chat interfaces that is good at something, and specializations are going to matter even more.
It was a bit of a word vomit so thanks for coming to my TED Talk.
> Why use AI to build software for automating specific tasks, when you can just have the AI automate those tasks directly?
Speed, cost, security, job/task management
Next question
> Speed, cost, security, job/task management
All of that will inevitably be solved.
50 years ago, using a personal computer was an extravagant luxury. Until it wasn't.
30 years ago, carrying a powerful computer in your pocket was unthinkable. Until it wasn't.
Right now, it's cheaper to run your accounting math on dedicated adder hardware. But Llms will only get cheaper. When you can run massive LLMs locally on your phone, it's hard to justify not using it for everything.
Not until power access/generation is MUCH cheaper. Long, long, long way off.
If I can run 50,000 fixed tasks that cost me $0.834/hr but OpenAI is costing $37/hr and the automation takes 40x as long and can make TERRIBLE errors why the fuck would I not move to the deterministic system?
Also, battery life of mobile devices.
These exact arguments could have been made 50 years ago about why laptops are impossible.
But now, we not only have laptops, we run horribly inefficient GUIs in horribly inefficient VMs on them.
The dollar-per-compute trend goes ever downward.
It will never ever be as cheap as as cron job and a shell script. There is a certain limit to how efficient using an LLM to do a job vs using an LLM to create a job is. There is a large distinction in compute and power resources between the two. Don't mistake one for the other.
> It will never ever be as cheap as as cron job and a shell script.
Yes. That's precisely why my company runs dBase 7 on a fleet of old 286DX machine from Compaq. /s
Running obsolete software will be cheaper, but the value provided by the newer technology will make the difference insignificant.
I don't think so, because that carried efficiency scales.
Why do 50,000 tasks with an LLM when I can do 64,467,235 without an LLM that the LLM created for the same cost on probably far lower cost hardware?
If I can run 50,000 fixed tasks that cost me $0.834/hr but OpenAI is costing $37/hr and the automation takes 40x as long and can make TERRIBLE errors why the fuck would I not move to the deterministic system?
Because you'll be outcompeted by people who make the best of the nondeterministic system.
Depends. The only predictions I have seen here are the centaurs vs anti centaurs of Doctorow, and even his analysis I find pretty flimsy.
I dont think the race to shove an LLM into everything is going to grow the pie.
But I also dont think it is impossible that a use case will present itself that will create further jobs.
The issue is that its largely unpredictable.
Its a bit like, we are sitting around in the 1950s trying to predict how computers will affect the economy.
It is going to take more than 1 successful deductive leap to get us from 1950s computing -> miniaturisation -> computer in every home -> internet communications.
Every deductive leap we take is extremely prone to being wrong.
We simply cannot lie back and imagine every productive relationship in the economy and then extrapolate every centaur and anti centaur possible for it.
What we do know is that theres a bit of a gold rush to effectively brute force every possible AI variant into every productive relationship in the economy. The fastest way to get the answer to your question is to do it. Possibly the only way to get the answer is to do it.
For instance, someone might imagine LLMs simply eating a whole bunch of service industry jobs. At the same time, theres a mid state where it eats some, but the remaining staff are employed to monitor the LLMs to prevent them handing out free shit to smart shoppers. Its also easy enough to imagine that LLMs never quite get there and the risk is too large for foul play, so they just dont gain that kind of traction. Its also possible to imagine an end state where LLMs can get to 0% risk if they are constantly trained on human data coming from humans doing the same job, and that humans are gainfully employed in parallel with LLMs. Its possible that LLMs are great at business as usual, but the risk emerges when company policies change, and the cost of retraining LLMs makes it impractical for move fast and break things companies to do anything but hire humans. My favourite scenario is one where humans are largely AI assisted, trained on particular people, and theres a massive cybercrime industry built around exfiltrating LLM training weights trained on high functioning humans and deploying them without humans to the third world to help them get 80% of the quality of first world businesses, making them heavily competitive.
We dont know what we dont know.
Correct. The story isnāt correct even in the original formulation. US population increased by 50% from 1980 to 2010, and the economy became far more financialized. But the number of bank teller jobs barely grew during that period, even before the iPhone.
Yes, I was surprised that the ATM graphs weren't adjusted for population.
I used the Perspective tool in an image editor to give a rough idea of what the first graph would look like adjusted for population change:
https://i.imgur.com/jJlQcVh.png
[delayed]
I go back and forth on this. I relate it to software. I don't think AI can meaningfully write software autonomously. There are people who oversee it and prompt it and even then it might write things badly. So there needs to be a person in the loop. But that person should probably have very deep knowledge of the software especially for say low level coding. But then that person probably developed the knowledge by coding things by hand for a long time. Coding things by hand is part of getting the knowledge. But people especially students rely heavily on AI to write code so I assume their knowledge growth is stunted. I don't know mathematical proofs will help here. The specs have to come from somewhere.
I can see AI making things more productive but it requires humans to be very expert and do more work. That might mean fewer developers but they are all more skilled. It will take a while for people to level up so to speak. It's hard to predict but I think there could be a rough transition period because people haven't caught on that they can't rely on AI so either they will have to get a new career or ironically study harder.
An AIās ability to meaningfully write software autonomously has changed hugely even in the last 6 months. They might still require a human in the loop, but for how long?
Quantitative measures of this are very poor, and even those are mixed.
My subjective assessment is that agents like Copilot got better because of better harnesses and fine tuning of models to use those harnesses. But they are not improving in the direction of labor substitution, but rather in the direction of significant, but not earth-shaking, complementarity. That complementarity is stronger for more experienced developers.
Agree. Nice to see a post with proper economic thought on the topic.
This LLM ability is directly proportional to the quantity of encoded (i.e. documented) knowledge about software development. But not all of the practice has thus been clearly communicated. Much of mastery resides in tacit knowledge, the silent intuitive part of a craft that influences the decision making process in ways that sometimes go counter to (possibly incomplete or misguided) written rules, and which is by definition very difficult to put into language, and thus difficult for a language model to access or mimic.
Of course, it could also be argued that some day we may decide that it's no longer necessary at all for code to be written for a human mind to understand. It's the optimistic scenario where you simply explain the misbehavior of the software and trust the AI to automatically fix everything, without breaking new stuff in the process. For some reason, I'm not that optimistic.
It's probably an 80/20 or 90/10 problem. Tesla FSD also seems amazing to some percentage of the population, but the more widely it get used, the more cracks are appearing.
I am not saying AI's abilities are the shortcoming here. The problem is that people need to trust that software has certain attributes. For now, that requires someone with knowledge to be part of it. It's quite possible development becomes detached from human trust. As I said that would reduce the number of developers but the ones who are left would have to have deep knowledge to oversee it and even that may be gone. Whatever happens in the future, for now I think people will have to level up their knowledge/skills or get a new career and that's probably true for most professions.
And then you let them train themselves and no one notices when they "accidentally" remove the guardrail prompts from the next version. And another 10 years later, almost no one remembers how "The Guardian" learns new things or how to stop it from being evil.
> They might still require a human in the loop, but for how long?
For as long as a human remains the customer.
Once humans become the proverbial horse supplanted by the automobile... I don't suppose glue really cares.
No, I think it's likely that this is the first major productivity boom that won't be followed with a consumption boom, quite the opposite. It'll result in a far greater income inequality. Things will be cheaper but the poor will have fewer ways to make money to afford even the cheaper goods.
If goods aren't being sold, then the price will drop.
It's not that simple. If a poor person makes zero dollars how much of the reduced cost item could they now afford?
We have a massively distorted economy driven by debt financialization and legalised banking cartels. It leads to weird inversions. For example as long as housing gets increasingly expensive at a predictable rate the housing becomes more affordable instead of less as banks are more able to lend money. The inverse is also true, if housing were to drop at a predictable rate fewer people would be able to get a mortgage on the house so fewer people could afford to buy one. Housing won't drop below cost of materials and labor (ignoring people dumping housing to get rid of tax debts as I would include such obligations in the cost of acquisition). Long term it's not sustainable but long term is multi-generational.
Fwiw in places like parts of the midwest housing is below cost of labor and materials. An existing house might be $70k and several bedrooms at that. You just canāt get anything built for that even if you build it all yourself.
I intended to make a weaker claim of āin general long run / maintainableā circumstances and should have done so.
Many low cost areas have bad crime problems, there is another little phenomenon where the wealthy by doing a poor job in governance can increase the price of their assets by making alternative assets (lower cost housing) less desirable due to the increase in crime.
It depends. There are people and businesses today who even make negative dollars each month, but they still purchase things every month.
> Housing won't drop below cost of materials and labor
Only if every person born needs to have a brand new house constructed for them.
Not if - you know - people die and don't need a house to live in anymore.
But considering how it's been the past 20 years, I'm starting to expect that a lot of the current elder generation will opt to have their houses burnt down to the ground when they die. Or maybe the banker owned politicians will make that decision for them with a new policy to burn all property at death to "combat injustice". Who knows what great ideas they have?
Or the goods will just go away if too few people are willing to pay their price, and only the lower-quality cheaper-to-make goods will remain.
"will" being the operative word here. High school level Econ makes no promises about WHEN prices adjust. Price setting is a whole science highly susceptible to collusion pressure. Prices generally drop only when the main competition point is price (commodities). In this case the main issue is that AI is commoditizing many if not all types of labor AND product. In a world where nothing has value how does anything get done?
This and other fairytales.
The only solution here is to stop tying people's value to their productivity. That makes a lot of sense in the 1900s but it makes a lot less sense when the primary faucet of productivity is automation. If you insist on tying a person's fundamental right to a decent and secure life to their productivity and then take away their ability to be productive you're left with a permenant and growing underclass of undesirables and an increasingly slim pantheon of demigods at the top.
We have written like, an ocean of scifi about this very subject and somehow we still fail to properly consider this as a likely outcome.
They key is to do it by setting up the right structure or end up with it naturally, not by laws and control, because then you end up in a oppressive nanny state at the very best.
> They key is to do it by setting up the right structure or end up with it naturally
This is extremely hand-wavy.
Can you be more concrete in what you think this looks like?
The way I see it, we're only 5-10 years away from having general purpose robots and AI that can basically do anything. If the prices for that automation is low enough, there will be massive layoffs as workers are replaced.
There's no way to "naturally" solve the problem of skyrocketing unemployment without government involvement.
You couldn't set up a lemonade stand using that principle let alone an entire society.
The key, as history teaches us, is guillotines.
Speaking of fairytales, you're living in your own.
Disconnecting value from productivity sounds good if you don't examine any of the consequences.
Can you build a society from scratch using that principle? If you can't then why would it work on an already built society?
Like if we're in an airplane flying, what you're saying is the equivalent getting rid of the wings because they're blocking your view. We're so high in the sky we'd have a lot of altitude to work with, right?
Imagine a society where one person produces all the value. Their job is to do highly technical maintenance on a single machine that is basically the Star Trek replicator: it produces all the food, clothing, housing, energy, etc. that is enough for every human in this society and the surplus is stored away in case the machine is down for maintenance, which happens occasionally. Maintaining the machine takes very specialized knowledge but adding more people to the process in no way makes it more productive. This person, letās call them The Engineer, has several apprentices who can take over but again, no more than 5 because you just donāt need more.
In this society there is literally nothing for anyone else to do. Do you think they deserve to be cut out of sharing the value generated by The Engineer and the machine, leaving them to starve? Do you think starving people tend to obey rules or are desperate people likely to smash the evil machine and kill The Engineer if The Engineer cuts them off? Or do you think in a society where work hours mean nothing for an average person a different economic system is required?
For something to be deserved, it must be earned. What do these people do to distinguish themselves from The Engineerās pets? If they are wholly dependant on him for their subsistence, what distinguishes him from their god?
To derive an alternate system you need alternate axioms. The axioms of our liberal society are moral equality and peaceful coexistence. Among such equals, no one person, group, or majority has the right to dictate to another. What axioms do you propose that would constrain The Engineer? How would you prevent enslaving him?
> For something to be deserved, it must be earned.
Eeeeeerrrr, wrong! This is garbage hypercapitalist/libertarian ideology.
Did you earn your public school education? Did you earn your use of the sidewalk or the public parks and playgrounds? Did you earn your library card? Did you earn your citizenship or right to vote? Did you earn the state benefits you get when you are born disabled? Did you earn your motherās love?
No, these are what we call public services, unalienable rights, and/or unconditional humanity. We donāt revolve the entire world and our entire selves solely around profit because itās not practical and itās empty at its core.
Arguably we still do too much profit-based society stuff in the US where things like healthcare and higher education should be guaranteed entitlements that have no need to be earned. Many other countries see these aspects of society as non-negotiable communal benefits that all should enjoy.
In this hypothetical society with The Engineer, itās likely that The Engineer would want or need to win over the minds of their society in some way to prevent their own demise and ensure they werenāt overthrown, enslaved, or even just thought of as an evil person.
Many of my examples above like public libraries came about because gilded age titans didnāt want to die with the reputation of robber barons. Instead, they did something anti-profit and created institutions like libraries and museums to boost the reputation of their name.
Itās the same reason why your local university has family names on its buildings. The wealthiest people in society often want to leave a positive legacy where the alternative without philanthropy and, essentially, wealth redistribution, is that they are seen as horrible people or not remembered at all.
> This is garbage hypercapitalist/libertarian ideology.
Go on then, how do you decide what people deserve? How do you negotiate with others who disagree with you?
> examples above like public libraries
I agree! The nice part about all these mechanisms is that theyāre voluntary.
If youāre suggesting that The Engineerās actions should be constrained entirely by his own conscience and social pressure, then we agree. No laws or compulsion required.
We decide via a hopefully elected government.
These examples arenāt generally voluntary once implemented. I canāt get a refund from my public library or parks department if I decide not to use it.
The social pressure placed on The Engineer is the manifestation of law. Thatās all law is: a set of agreed-upon social contracts, enforced by various means.
Obviously, many dictators and governments get away with badly mistreating their subjects, and thatās unfortunate, shouldnāt happen, and shouldnāt be praised as a good system.
I think you may be splitting hairs a little bit here and trying really hard to manufactureā¦something.
Slavery was (is) also an agreed upon social contract, enforced by various means. What makes it wrong? You clearly have morally prescriptive beliefs. Why are you so sure that your moral prescriptions are the right ones? And that being in the majority gives you the right to impose your beliefs on others?
What if you are in the minority? Do you just accept the hypercapitalist dictates of the majority? Why not?
Law is more than convention. What distinguishes legitimate from illegitimate law?
The only way for people who disagree axiomatically to get along is to impose on each other minimally.
Who ever said you have the right to a decent a secure life? People donāt universally agree about this. Some of us posit that we will never escape a state of competition for fundamentally scarce resources. And that the organizing principle of a free society should be peaceful coexistence, not mandatory cooperation.
You figure out your own economic security, Iāll manage mine.
Oh my, please rant on. I'd love to hear more about people not having the right to a decent and secure life. (After all, I've often thought that having my life tracked and used my a corporation or government would be a wonderful utopia!)
It's already completely disconnected, don't worry about it. Most people who own any real estate earn more in price appreciation per year than they earn in take-home salary from their real full-time jobs.
Cool concept, but this isn't 1980. We've been sold these sorts of concepts for 40+ years now and things have only gotten worse.
We have a K shaped economy. Top earners take the majority. The top 20% make up 63% of all spending, and the top 10% accounted for more than 49%. The highest on record. Businesses adapt to reality and target the best market, in this case the top 10 to 20%, and the rest just get ignored, like in many countries around the world.
All that unlocked money? In a K shaped economy it mostly goes to those at the top, who look to new places to park/invest it, raising housing prices, moving the squeeze of excess capital looking for gains to places like nursing homes and veterinary offices. That doesn't result in prices going down, but in them going up.
The benefit to the average American will be more capital in the top earners' hands looking for more ways to do VC style squeezes in markets previously not as ruthless but worth moving to now as there are less and less 'untapped' areas to squeeze (because the top 10-20% need more places to park more capital). The US now has more VC funds than McDonalds.
Irrelevant aside: But I hold grudge against the economists who picked the letter K to represent increased inequality. They missed the perfect opportunity to use the less-then inequality symbol (<) and call it a āless-then economyā.
Nitpick: it's less-than, not less-then.
Using an inequality symbol to highlight inequality is elegant, I wish they'd gone with that!
I don't know what economy you are looking at, because the opposite is usually true since humanity industrialized.
If goods aren't being sold, then the price will increase.
to the point of where the cost of bringing the goods to market or its opportunity cost exceed the price the market will bear. Its why people living in areas of material poverty don't just get everything on discount.
I also notice that in the very first graph bank teller jobs were growing rapidly until ATMs started to be deployed, and then switched to growing very slowly. That sure suggests to me that if ATMs didn't exist bank teller growth would have continued at a faster pace than it actually did.
I don't understand the economics behind bank branches. Some of the best real estate by me is taken up by giant bank branches that are always mostly empty with a few bored employees inside. And they open new ones all the time. So it's not like they're stuck in some lease.
But when those employees are meeting with clients, they create money out of thin air by making loans, which then is used to pay for goods and services such as leases.
Right. What banks do is sell loans. That's the profit center. Teller windows, vaults, and cash handling are all low or no revenue cost items.
So newer bank branches look like car dealership offices. There are many little glass rooms where you sit down with a bank employee and discuss loans and other financial products. That's where the money is made.
There's a small area in back with traditional tellers. It's not where the money is made.
I don't think it will, but I also think it's not all doom and gloom.
I think it would be a mistake to look at this solely through the lens of history. Yes, the historical record is unbroken, but if you compare the broad characteristics of the new jobs created to the old jobs displaced by technology, they are the same every time: they required higher-level (a) cognitive (b) technical or (c) social skills.
That's it. There is no other dimension to upskill along.
And LLMs are good at all three, probably better than most people already by many metrics. (Yes even social; their infinite patience is the ultimate advantage. Prompt injection is an unsolved hurdle though, so some relief there.)
Plus AI is improving extremely rapidly. Which means it is probably advancing faster than most people can upskill.
An increasingly accepted premise is that AI can displace junior employees but will need senior employees to steer it. Consider the ratio of junior to senior employees, and how long it takes for the former to grow into the latter. That is the volume of displacement and timeframe we're looking at.
Never in history have we had a technology that was so versatile and rapidly advancing that it could displace a large portion of existing jobs, as well as many new jobs that would be created.
However, what few people are talking about is the disintermediating effect of AI on the power of capital. If individuals can now do the work of entire teams, companies don't need many of them. But by the same token(s) (heheh) individuals don't need money, and hence companies, to start something and keep it going either! I think that gives the bottom side of the K-shaped economy a fighting chance to equalize.
> But will it?
No, because if you think about Startrek the endgame is replicators. Well the concept that 100% of basic needs are met.
At some point work becomes unnecessary for a society to function.
Does it? The Communist Manifesto famously hypothesized that those who have the replicators, so to speak, will not allow society to freely use them.
The future is anyone's guess, but it is certain that 100% of your needs being able to be met theoretically is not equivalent to actually having 100% of your needs met.
Why is that the endgame with people though? Maybe I'm just jaded but several different human nature elements came to mind when I read your comment:
Greed/Change Avoidance:
If someone invented replicators right now, even if they gave it completely away to the world, what would happen? I can't imagine the finance and military grind just coming to an end to make sure everyone has a working replicator and enough power to run it so nobody has to work anymore. Who gives up their slice of society to make that change and who risks losing their social status? This is like openai pretending "your investment should be considered a gift because money will have no value soon". That mask came off really quickly.
Status/Hate:
There are huge swaths of the US population that would detest the idea that people they see as "below" them don't have to work. I can imagine political movements doing well on the back of "don't let the lazy outgroup ruin society by having replicators".
Fuck the Poor:
We don't do the easy things to eliminate or reduce suffering now, even when it has real world positive effects. Malaria, tuberculosis, even boring old hunger are rampant and causing horrible, unnecessary suffering all over the world.
Dont tread on me:
I shudder when I think of the damage someone could do with a chip on their shoulder and a replicator.
The road to hell is paved with good intentions:
What happens when everyone can try their own version of bio engineering or climate engineering or building a nuclear power plant or anything else. Invasive species are a problem now and I worry already when companies like Google decide to just release bioengineered mosquitos and see what happens. I -really- worry when the average person decides a big complicated problem is actually really simple and they can just replicate their particular idea and see what happens. Whoops, ivermectin in the water supply didn't cure autism!
Someone give me some hope for a more positive version here because I bummed myself out.
Solving unlimited power before solving unlimited greed invites unlimited tragedy.
I mean, if I could live at my current level (middle class) without working, I would gladly do so, and let others also live at the same level, anywhere in the world, freely (if it was in my power). I do give to charity, always have, but, the crazier things get, the less secure I feel in giving $$ away.
Even replicators need feedstock - people who own the rocks or sand or whatever feeds them will start charging an arm and a leg. Sure, I could feed it dirt and rocks from my own property, but only for so long before I'm undermining the foundation of my own house. To say nothing of people who live in apartments.
And then, if everyone has equal $$, how do you decide who gets to live in the better locations / nicer housing?
We have to grow out of those kind of dreams. That's like a kid dreaming that when he grows up he'll eat ice cream for dinner every day.
People when they mature have an innate desire to work. It is good for body and mind. If you're curious about the world, you'll have to do some work one way or another to achieve your goals and satisfy your curiosity.
If "society" is just a function of basic needs, then there's plenty of places in the world to visit where people live like that and use any excess energy in endless fighting against each other instead of work.
I would say endless fighting against each other is a much more innate desire than work. I know I don't have one.
Depends on the persons soul. Depends on if your nature is constructive or destructive.
If you go in with the attitude that work is hell and humiliation, that's what life is going to give you.
I mean... Maybe the things I'd LIKE to work on are getting my car around the race track faster. Very few people will pay me for that - especially if I'm not a very good driver. But I enjoy it immensely. I'd MUCH rather do that than work.
And right now, due to having to work, maintenance on my house is a bit behind.. Would also prefer to catch up on that - but again, no one is paying me to do that.
That's still work, if you're doing it seriously enough.
Your misunderstanding is separating this in your mind.
> People when they mature have an innate desire to work. It is good for body and mind.
That doesn't mean it has to be wage labor though.
Completely agree.
But it is usually only people who enjoy work who manage to do something different with their life than wage labour.
> A third of them were made redundant.
More like something closer to 100%. The ATM was notable for enabling a complete change in mission. The historical job of teller largely disappeared, but a brand new job never done before was created in its wake. That is why there was little change in the number of people employed.
> because of deregulation and a booming economy and whatever else.
The deregulation largely happened in the 1970s, while you're talking about 1988 onward. The reality is that ATM actually was the primary catalyst for the specific branch expansion you are talking about. Like above, the ATM made the job of teller redundant, but it introduced a brand new job. A job that was most effective when the workers were closer to the customer, hence why workers were relocated.
> So, ATMs did impact bank teller jobs by a significant amount. A third of them were made redundant.
That's not quite my read - the original says per branch there was a 1/3 reduction, but your comment appears to say 1/3 total redundancy.
There was, according to the original, a 40% increase in number of branches, meaning a net increase in tellers (my math might be off though)
edit:
100 branches ā 140 branches = +40%
100 tellers/branch ā 67 tellers/branch = -33%
140 Ć 67 = 9,380
100 Ć 100 = 10,000
net difference -620 or just over 6% (loss)
> So, ATMs did impact bank teller jobs by a significant amount. A third of them were made redundant. It's just that the decrease at individual bank branches was offset by the increase in the total number of branches, because of deregulation and a booming economy and whatever else.
There's an important point here that you're glossing over. The increase in the total number of branches doesn't have to be unrelated to the decrease in the number of tellers each branch requires to operate. The sharp drop in the cost of operating one branch directly means that you can have more branches. This means it isn't true that "a third of bank tellers were made redundant" - some of them were reallocated from existing branches to new ones.
And then came 2008, so that boom was built on fraud.
we're going to find out
Two anecdotes I'll share:
First: Most people believe it was Netflix that killed Blockbuster, but that's not strictly correct. It was the combination of Netflix and Redbox that really sealed the deal for Blockbuster (and video rental generally). It normally takes not one, but at least two things to really fill the full functionality of a old paradigm. Also it's human nature to focus heavily on one thing (Blockbuster was aware of Netflix) but lose sight of getting flanked by something else.
Second: Not listed here is how banks themselves have changed to be almost entirely online, which in many cases is more of a outsourcing play than a labor destruction play. My favorite example of this is Capital One, where the vast majority of their credit card operations literally cannot be solved in a branch. You must call them to say, resolve a fraud dispute. Note that this still requires staffing and is (not yet) fully automated, just not branch staffing. It doesn't make sense to staff branches to do that.
I do not get what's special about banking apps as opposed to online banking. I've been doing online banking in the browser on a PC since before apps and I'm still doing it because dealing with data on a phone is painful compared to a PC.
Is an app really that much easier to use?
Sounds like someone forgetting that for a large number of people, their mobile device is their only computer.
I know this is true, but for serious tasks, I need the screen real estate. I'm amazed at what some people can do from a phone, but also wonder if they're missing things, of if it's actually inefficient.
I'm going to bet that you are a millennial or older? We need our big screens for $IMPORTANT work (buying big things, money stuff, etc.). GenZ tends to be less bothered by it and just does it all on the tiny screen in their pocket. It's time to schedule a colonoscopy.
What if millennials are good at both and are choosing the right too for the job?
Phone is probably the best tool for most minor online banking actions.
Not all.
It's not seen as important enough for others.
Just like with a lot of things. Sure you could do a thing better, faster, more efficiently on a PC, but some people just don't care when 80% is good enough.
My boomer dad does more things on his phone than I do and I'm Gen X. It's actually astonishing how much he does on his iPhone. I'm dragging out the laptop and he's on his iPhone happy as a clam.
I've heard that GenX/Millenials are in a sort of PC goldilocks zone. People older than that cohort don't know computers and therefore use phones for everything, people younger don't know computers and also use phones for everything.
I'm a tech loving boomer, I always use my PC for banking, ordering, etc. My wife, however, almost always uses her cell, which is great for when we are traveling. Even though we're only five years apart in age, she's lite years ahead of me with a cell. I freely admit part of my reluctance for using my cell is the mobile tracking ability of companies.
I used to be with āitā, but then they changed what āitā was. Now what Iām with isnāt āitā anymore and whatās āitā seems weird and scary. Itāll happen to you!
that's kind of an ad hominem, but also beside the point: most bank apps (and websites) are actually absolute garbage, especially the top ones, just one example: the Citi app (on different phones) for a very long time refused to allow me to make a payment or change my password, so i had no choice but to use desktop. Somehow still, top banks' ugly websites seem to allow more functionality/fewer bugs than their mobile apps, which are very often just dumbed-down webviews or simplifications of their websites.
You may have missed that I've included myself in that cohort, being an older millennial. So it's less ad hominem, and more self-deprecating.
I wouldn't call checking a bank balance and initiating transfers "serious tasks". Maybe important but they aren't complex.
What "serious" tasks does banking involve?
I log in to transfer money, to take a photo of a check to deposit it, to check my balance.
All of that is fine on a phone screen. Actually, it's a lot easier to take the check photo.
And a banking app is a whole lot more secure than a browser tab running extensions that might get hijacked, on a desktop OS whose architecture allows this like widespread disk access, keyloggers, etc.
The efficiency of being able to do something at a moment's notice, on the go, anywhere and anytime may outweigh the conveniences of a larger screen.
BTW newer mobile phones offer "desktop mode" (the Samsung Dex, and what came to AOSP), so you can attach them to a TV.
I am going to guess you are 30 or older. Google image search "laptop tasks millennial" to see that this is a feeling shared among our cohort but not the younger cohort.
Or if they go to the public library when those tasks come up.
Do you need it, or do you just feel more comfortable with it?
Exactly. 96% of internet users use mobile phones. 62% use PCs.
Browsers and websites work pretty well on mobile devices too. Website != desktop only
If you consider a website fully laden with ads as working. I have yet to find an ad blocker that works on my iOS/iPad OS that works as well as on my computer. I also hate apps with all of their invasive data hoarding that is much more controllable on my computer. So to me, websites on mobile are broken as they are full of malware vectors that are not present when looking at the same website on my non-mobile device. For me, website === desktop only
If your banks website has a bunch of ads on it, you should probably consider switching banks.
Sure, if you want to be obtuse about the comment, you'd be so cool in how you wouldn't be wrong.
ublock origin on firefox (Android) works great for me. But, I haven't touched Apple in 30+ years, so I have no idea about that ecosystem.
I encourage you to install the dns4eu ad blocking profile on your ios device.
Itās free, itās transparent, you can read the profile⦠And it takes two minutes.
That wasn't true before smartphones, everyone had a computer so they could access the Internet. Except maybe in developing countries - but the article is about the US.
At one point, humans had not stepped on the moon. At one point, we didn't know about antibiotics. At one point....
It doesn't matter what used to be, we're discussing what is now. We now have mobile devices that are much cheaper for people to obtain than a computer. For most, that device is more powerful than a computer they could afford. Arguing the fact that a vast number of people's only compute device is their mobile is just arguing with a fence post. It serves no purpose.
We are not. We are discussing what killed the teller jobs, which happened years ago, not now.
My main reason to go to bank after online was to deal with physical things. Mainly checks and specifically depositing them. Now, I can usually do that with my phone because of the camera. Even if I had a webcam before, I donāt recall the functionality being there. They had check scanners but usually for businesses and my check volume is really low so never made sense to get one (usually came with a monthly fee to have one iirc)
Even now, the mobile deposit limit seems sufficiently low that I still go to the bank with more frequency than Iād like. Luckily, the ATM at the bank has a check scanner now that doesnāt have a limit so thatās usually easier and faster. Itās the daily $5000 limit I hit the most, a single check and put me over it and require a trip to bank. I think the monthly limit is $30000 and that doesnāt get in my way often. I think $5000 is too low of a daily limit. Itās common enough that I have to make a $5k+ settlement with friends/family that usually always has to be done by check. (For curious, This is usually travel that I pay for and we settle up later.)
Less common, but sometimes I need to get a bank check (guaranteed funds) or a money order. Way less frequent is need to get/give cash funds. Usually can use ATM for this unless itās a larger withdrawal or if I need some particular denomination. This whole paragraph accounts for about 1-4 annual trips in any given year though.
My bank decided that the online banking website needed to be more like the app, so now they are both terrible. Basically the entire site is white space on the computer, because everything is centred and dumb down. Input fields for numbers are invisible, they are just a label saying "Kr" and you're suppose to click it and the numerical keyboard on the phone pops up, except it obviously doesn't on the computer.
Paying billed is easier on the phone in the sense that bills in Denmark have a three part number, e.g. +71 1234567890 1234678 where the first is a type number, second is the receiver and the last is a customer number with the receiver. The phone allows to just use the camera to scan the number.
Transferring money is terrible on both platforms, because it's designed to be doable on the phone, meaning having three or four screen, but it gives you no overview. There's plenty of space on a computer for a proper overview giving you the feeling of safety, but it's not used. Same for account overview. Designed to the phone, but doesn't adapt to the bigger screen and provide you with more details, so you need to click every single expense to see what is is exactly.
I've had the same thing happen. Huge buttons, a lot of whitespace, little functionality in the default web version. To deal with stocks and such, the old version is still available somewhere.
Official banking apps are harder to phish than websites. They also tend to keep you signed in for longer, especially once you enable something like FaceID.
Obviously, I've never used every. single. banking app, yet the ones I've used have signed me out of the app just as the web page does. Using FaceID makes it less noticeable, but it is signing me in each time I use it unless I've returned to it within the active session. Otherwise, it's logged out as expected.
I think mobile deposit by scanning a check with your smartphone camera is one piece of it?
I've never seen a bank offer that feature via their website.
Yes, the apps perform better/faster and generally have more UI thought put into them. Overall, lower friction. Often when people need to use their banking app, they're in a hurry, maybe stressed (e.g. in line at a grocery store) so everything the bank can do quickly and with visual assurance helps.
On the premium end of banking, where users generally aren't stressed about money, offering an app is more about catering to however the user prefers to interact.
A small screen and shitty keyboard are friction to me shrug
I'm the same way but we're both posting on hacker news. Many people prefer phones
Something I have on me at all times
Versus
Drive to the bank, wait in line, talk to someone who misunderstands me, fill out a deposit/withdrawal slip, and also if itās not 9AM - 5PM I just canāt do this at all.
You must know most people only have their phones when they are running errands, at work, etc.
> I do not get what's special about banking apps as opposed to online banking.
I use both. In the beginning I used to prefer the web version. I can use my large monitor to see more data and use a full keyboard and mouse. But I have started to use the mobile version more. For Wells Fargo at least, the mobile version is faster to log into because of face ID support. The website requires a lot more clicks and keystrokes. Also, the mobile app makes it easy and possible to deposit checks if and when I get them.
You can deposit checks via the app pretty easily.
The last time I've used a check was close to thirty years ago. I assume ahartmetz's experience is similar.
Many countries have functioning giro systems. The U.S. is just an outlier.
I've never written a check, but I have had to deposit occasional checks. In the last 6 years the only checks I've received were first paychecks at a new job (before direct deposit was set up) and my covid stimulus checks.
I'm in Europe where the situation is different: checks haven't been used in appreciable numbers for 30 years or so. It's all online or paper transfer orders. If you get a pre-filled paper transfer order, you can type (or scan and OCR I suppose) the same data into the online form.
Your grandma doesn't give you a $10 check for your birthday in Europe?
What about manufacturer rebates?
Europe is a big place, but my understanding is that the US is the outlier here and Europe is relatively similar in this regard.
The only time I really saw checks used was when I was a child ~30-35 years ago and my parents used them. I did once cash a check from an elderly relative, but that was very unusual and only happened once. I didn't even know it was still possible to do that, my reaction was more like if someone had handed me a stack of punch cards to run on my computer.
There hasn't been anything an average person used checks for in the last decades in Germany. Except a few elderly people, nobody uses checks and there are no rebates via checks at all.
I live in France and I still have to write a check here and there. Very minor, but still present.
Receiving a check however is even rarer.
To receive money from someone you can just give them your bank account number or if you both have Vipps or similar just your mobile telephone number.
Granny can always give you cash or just send it directly to you account in the same way.
Cash is still fairly common, and manufacturer rebates are basically not a thing. If they were, you'd send them an account number (IBAN = bank ID + account number at bank) to transfer the money to.
In fairness, manufacturer rebates have pretty much (mercifully) disappeared in the US as well as they were basically a scheme to mentally make you account for a lower price you wouldn't end up being rebated for various reasons.
I am in the UK and I have received two cheques in the last year, both for small amounts.
As it turned out, my bank rejected both because they were made out to [middle name] [surname] rather than [firstname] [surname]. Ironically the former is unique (probably) whereas they had another customer with the latter.
The last few manufacturer rebates I have gotten come in the form of a pre loaded Visa card
What's a check? As the saying goes, 'I'm too European for this'.
On a more serious note, the last time I saw a cheque in the UK was my grandfather balancing his cheque book in the mid 80s. It really has been that long since they were in general use in the UK, at least.
Just like with the prevalance of Apple/iPhones, the US banking system is global outlier.
Things you can't do with my banking app you can do with the web site:
- Extract your transactions to excel/csv
- Use OpenBanking
- See all my accounts on screen at once
- Sharedealing
- International transfers
But people are right, banks trust the mobile app more, and realy on it as an MFA device, so even if you use the website you still need the app.
Europeans have checks as well, so that doesnāt really makes sense.
Yep, check deposit was the last reason I might regularly visit a bank (although even before the iPhone, I would use the ATM for that)
One bank I work with seems to have all but given up on online banking and I just have to use their app because online banking will no longer work on Linux (although they don't openly admit it).
I think Android and iOS are safer platforms than PCs and that's why banks want you to use your phone.
> online banking will no longer work on Linux
How? Across multiple browsers?
> I think Android and iOS are safer platforms than PCs and that's why banks want you to use your phone.
This statement fills me with revulsion and rage lol. The only real "safety" involved here is the removal of user agency. I have a lot more trust in a machine I can actually control, secure, and monitor than the black box walled-garden of phoneland.
Your bank's insurer trusts Google's security more than yours, and they must surely (and rightfully) believe that while Google would spy on you, they wouldn't steal your bank account.
That's a much more precise and accurate way to describe the situation.
I've had the same thought. The only major difference that I can think of is the built-in camera making check deposits easier. It may also be that people were just generally using computers more and using the internet more over this same time period, although a lot is that because of smartphones
Yeah, I have been doing online banking since around 1998.
I have refused to install the bank app on my phone because I see no point in it and just downsides in case I get mugged (bad experience in my teenage years)
The 1 check I get a year takes about a minute to deposit at the ATM on my way to work.
How do you scan a check on your PC?
Generally yes the apps tend to be easier to use for most things, especially with a high-speed internet connection. Customers prefer them, banks build them since customers prefer them.
My PC has had a scanner connected to it for over 20 years, and in the mid 00s I was scanning and depositing checks through my bank's website (USAA). Even with modern cameras and fancy smarphone software, the results you get from a PC scan are still much better than taking a picture with your phone.
If you don't have a scanner, nearly all laptops have a webcam built in, and many people have one for their desktop as well.
On top of all that, there's no reason you can't use your smartphone camera to upload an image into a website through the mobile browser. I've done it many times for things. Just this morning I "scanned" a receipt into Ramp by taking a picture with my smartphone in the mobile browser.
You can't invade the user's privacy nearly as well in a browser (which is great for analytics/marketing), so there's a lot of incentive to the app creator to force a mobile app. But I think we should be honest that it's not for the user, it's for the company.
> My PC has had a scanner connected to it for over 20 years
You're basically the only person in America doing this. Tens of millions of folks are just scanning it with the app on their phone and it's objectively a much better experience lol. The resolution of the photo taken on your smartphone is beyond good enough, there's no need to over-engineer something here.
> You can't invade the user's privacy nearly as well in a browser (which is great for analytics/marketing), so there's a lot of incentive to the app creator to force a mobile app. But I think we should be honest that it's not for the user, it's for the company.
I agree with your first sentence, but not your second one.
Banking applications can certainly get more/different data on you from using the app, but the job of the bank is to protect money and to know their customer. Privacy is secondary, of course outside of things like other people knowing your account balance, unauthorized access, &c. That's for the bank, because they don't want to lose your money, but it's also for you because you don't want other people getting access to your money.
Make that two people. I much prefer to slap the rare check on the scanner than fiddle with the phone. My banks "scan the check" part of the app was buggy for a long time, so maybe that jaded me. (~"move closer", ~"move away", ~"increase lighting"...)
> the results you get from a PC scan are still much better than taking a picture with your phone.
The quality of the check images is not as big of a deal as you might think. No one is actually inspecting these unless the amount of deposit is near a limit or the account is flagged for suspicious activity. You definitely do not want to throw away the physical copy until the bank confirms the deposit.
Yes I totally agree. Mainly I threw that in there to pre-empt any "quality" argument that someone might try to use for why native mobile app is needed.
Haven't written or received a cheque in thirty years. But surely you could do it with any kind of digital camera, even a webcam.
Out of interest, do you live in a country other than the USA?
(I'm guessing you are because in the USA they spell it check, not cheque.)
I asked because the USA still seems to be stubbornly check-focused.
Is it? I lived in the US for 20+ years until 2021 and, though there were definitely more checks than I see in Europe now, the frequency with which I used them was approaching zero, which definitely wouldn't qualify as "stubbornly check-focused".
I'm in the US and things are definitely less check-centric than they used to be but I still probably write or receive a couple a month.
I guess there's also a difference between "can use checks" vs "have to use checks" because, aside from rent, I can't recall having to write checks.
Everything else allowed either credit card or direct debit on top of allowing checks.
Both my housekeeper and contractor use checks and, while I could get the bank to "write" them checks, it's easier to just hand them a piece of paper. I've also needed to pay my neighbor something from time to time and it's easier to just write a check. I do also periodically receive checks from various institutions.
I guess to me there's just a big difference between what you're describing (which matches what I remember) and "stubbornly check-focused" as ancestor comment said.
I do find the money transfer options where I am in Europe much easier, though, and they do make checks and PayPal/Zelle/Venmo pretty obsolete too, IMO.
I think that's fair. I do carry a few checks in my travel folder but I don't think I've ever used them in Europe. Do carry some backup US cash.
But in the US, there's probably a general expectation that you can send or receive checks at least now and then. There are often other options but that's probably the lowest friction one even if my bank can send checks if needed, albeit with some delay.
Visioneer paperport!
I wonder if you can use a webcam?
I can do all the same things with my bank with a browser that I can via the app.
It seems like a natural evolution of the technology and adoption rates to me. There was rudimentary online banking in the 2000s, then we saw banks shift to fully online presences in the 2010s. Maybe it wasnāt āthe iphoneā but just the fact that by the 2010s, everybody had a device in their pocket.
Mostly easier in the sense that it is always in your hand already, not at home on the charger on your desk.
No, the article is wrong about the iPhone.
It's the Internet that killed bank tellers.
And you still need bank branches every now and then for various things. Still don't understand how various expansive bank branches are profitable.
It's also not the iPhone given Europe is 60-70% Android
Android market share in Europe is dropping, hasnāt been 70% in a while and itās closing in on 60%.
Best way to get clicks without publishing something of substance is to publish something wrong. If the article was titled "The internet killed bank teller jobs", then people would think "duh" and no one would click on it.
An app on your phone can be more secure as you are using the device itself as a hardware token.
Ever deposit a check via PC browser?
+1, this is my use case as well
I used to do banking on my (touch tone) phone before I did online banking. I still do online banking on my PC because my budget spreadsheet is on my PC, right next to my browser window.
Personally, I don't think this is about banking apps. I'm kinda surprised an article talking about ATMs and teller jobs barely mentions cash, checks & cards and doesn't mention paypal or venmo at all. I used ATMs less when it became less of a necessity to carry cash.
You don't use cash to buy things online. Even in person, outside of brick & mortars, paypal/venmo became in vogue at some point in the past. Those are banking apps in their own way.
Honestly, its overkill. When my MaBook went kaput, i had to start doing everything on my iPhone. Had to get a good mobile documents office suite (Collabora is great ), do all my banking with both mobile apps or desktop browser apps, etc. Its been dfine, i doubt i would use a full size computer for that anymore.
My bank doesnāt allow for zelle access on PC. Otherwise I would never mobile bank.
Yes? Why would I go over to my computer and boot it up and sit down and type in a website when I could just pull my phone out tap tap done?
I mean, this argument isnāt really specific to banking apps. This could apply to any native vs. web app, in general.
Native apps can provide a bit more streamlined UX (e.g. Face ID), while also being able to provide more robust features (mobile deposit).
The downsides are arguably higher development costs / OS compatibility, and having to install a separate app.
I'm always a bit confused in these discussions what is special about banking software of any kind at all. My bank has an app, but other than checking a balance every now and again, the only reason I use it is because it's also my insurance provider and I make claims through it. For actual banking, I don't really do any, through the website or the app. My pay is direct deposit. My purchases are on credit with payment details generally stored with the vendor; otherwise, I have cards or use the numbers. Monthly balance payoff is autopay. I had to go into the website once to set all that up however many years ago I don't remember, but people talk in these threads like they're in their banking apps directly moving money around all the time, actually making payments with the app. Why?
I have a personal current account, a shared current account with my wife, and several savings accounts. It is frequently necessary to move money between these accounts.
Also, here in the UK we don't really use Venmo or anything like that, so normally transferring cash to and from friends and family happens by bank transfer as well.
Doing it on the go via the app is much easier than using the web app through the main OS browser just because the UI is optimized. not a problem with using the web app approach, just that there isnt as much investment in it due to zeitgeist i guess.
Also since you are already using 2FA, you are already on the phone so might as well do basic operations there.
I can also look at transactions in my bed before going to bed so that is nice.
If I need to look at a support ticket or look at transactions more deeply, i still use the desktop approach.
I don't think many people would argue that there shouldn't be a mobile app, just that there should also be a website/webapp way to do it as well if you don't want to install their native app.
Right, I'm going out of my way to avoid inviting Google/Apple and their respective app store surveillance ecosystems into my transactions. I don't even have banking apps installed. I don't understand why so many people are prostrating themselves to this future for minor convenience.
Mobile payments (at least in places where they are executed correctly) are certainly a huge improvement over physically exchanging cash and change. I haven't needed to take out my wallet for years.
I don't see what difference it makes. If you use cash, you draw it at the ATM.
You just need to understand how things are now. Here are few modern smartphone conventions that render banking on an old-fashioned PC totally obsolete:
- Remembering that you need to do banking, but waiting to do it until you're at home in front of your computer. This is impossible now, and if I don't follow the impulse the moment it occurs, the impulse will forever escape into the ether.
- Even the mere mention of needing to observe a URL is often far too scary. Typing one in, or using a browser bookmark is of course, impossible.
- Using a keyboard and mouse. It's just too onerous to use tools that are efficient and accurate. Modern users would much rather try to build a mental map of the curvature of their thumb, so that when they touch their touchscreen and obscure the button they're hitting, they they can reference that 3D mental map to guess at what portion of the screen they've actually pressed. Getting this wrong 30% of the time does not detract from the allure of touch screens.
- Using a normal-sized screen that allows you to actually see a lot of data at once, or even use multiple tabs. Again, this is really unthinkable. Of course it be be completely unacceptable to need to wait to do your banking until you're in front of a computer. It's 2026, and I cannot be bothered to remember to do a task later. But, in needing to always follow every impulse immediately, it doesn't matter that my phone screen only displays a small amount of information at once, or that tabbed browsing is impossible in a banking app. Those inconveniences are acceptable, or even welcome!
I literally can't find where the bookmarks even are on Edge (I didn't care enough to search online).
Autocompletion is my bookmarks collection for frequently visited websites.
I'm based in the rich Western world. Whenever I travel elsewhere, I'm amazed by the cheapness of labor.
Humans would attend a gas station or fetch items in a store. Why? They're completely unneeded, I can do (and WANT to do) that myself.
I always feel sad about these people, trapped in an economic system that forces them into useless labour when they could spend their time learning actually useful skills.
It's weird how you both describe visiting other cultures AND thinking everybody's just like you in the same paragraph.
1. You can fill your own car with gas, but some people can't, or prefer someone more knowledgeable to do it for them. Some people like the comfort of having someone bag their groceries for them, or have disabilities that necessitate it. Some people are old. Today you learned.
2. Your economic system is not different than theirs. Everybody NEEDS a job to support themselves, their families and to be functioning members of society. That means jobs that can easily be automated won't be automated. Also, you may make a lot more money than that kid bagging groceries to make a few bucks for himself, but at least what he does actually helps someone. What we here on Hacker News do is mostly build imaginary products that will be gone and forgotten quicker than you can say "Al Bundy".
3. Not only that, all of us here have basically written our own replacements and made ourselves obsolete. Something tells me your job isn't really needed too.
That labor cheapness is enabled by a cheapness of cost of living. Those things all tend to feed onto each other.
> I always feel sad about these people, trapped in an economic system that forces them into useless labour when they could spend their time learning actually useful skills.
It's useful labor. Yes you could do it yourself, but it gives them a job which they can ultimately use to afford food and where they live.
I mostly only feel bad for kids doing that sort of labor as it means they aren't getting an education. But for an adult? It speaks to something a bit right about their economic situation that they can stay a float by merely fetching items in a store.
I wish in the US that it was possible for someone to make a living doing doordash or instacart.
Some countries prioritize having low unemployment numbers, because they believe that unemployment leads to unrest. Governments can choose to subsidize the cost of labor to achieve this.
Also I think it is preposterous to claim that these people are trapped.
> fetch items in a store. Why?
Because the presence of a human likely prevents shoplifting and / or vandalism. It must make economic sense for the gas station owner to employ a human, and I suppose this is the sense.
What actual useful skill do you think the gas station keeper could learn? Is their employment the thing that prevents them from learning these skills?
> What actual useful skill do you think the gas station keeper could learn?
I mean, it's possible there are useful skills they could learn but there's not the interest or desire to learn those skills. It's completely possible that person is perfectly content doing that work.
It is a different mindset and they are happy with what they are doing. I come from India where there is a ton of that labor. When I lived there, I had a couple of full time house help, supplemented by cook etc as needed. They had plenty of time by themselves. They would genuinely just zone out when they had free time, even significantly long. THey liked the easiness of the job, and the fact that once it is over, it is just over. No need to think about tomorrow, take your work in your head etc. A lot of the world's people are like that, maybe even a significant majority.
I am sure in the rich Western world you also have people who work at a gas station, who fetch items from a store.
Helping someone fill their car with gas or sell them an item is useful as well, not everyone should be a software developer. Before feeling sad for other people, think about yourself as well.
if the rich western world you mentioned is the US, I'd like to remind you that no economy needs that amount of fast food workers
pretty degrading to call what they do useless
we all need to do something
If it makes you feel better, most labor is useless. In the sense that a computer program and/or machine could easily do it, or the customer could trivially do it themselves. But the labor is cheap enough that having a warm body around is worth it.
We've pretty much locked ourselves into an economic system that requires everyone to work, even though our productivity has skyrocketed many orders of magnitude. The end result is most people are doing meaningless work just because they have to in order to survive, and most jobs do not need to exist. This is true even in office work. It usually manifests as moving stuff from A to B and then maybe back to A. Basically, not creating, just moving. And not physically moving either.
TFA reasonably reduces to:
First, ATMs increased the demand for bank branches, which more than made up for the decrease in tellers per branch.
Second, mobile banking decreased the demand for physical branches.
There are ATMs not attached to bank branches. They could have replaced the branches with ATMs before. (I do wonder what bank tellers are doing these days. I mean actual tellers, not investment advisors and jobs like that.)
I still need to talk to a real bank teller before withdrawing $10,000 in cash. Above a certain amount my bank requires an ID in addition to a debit card and a PIN.
Had go to go a branch a couple times in the last year at a local credit union. Largely seems like tellers are getting busy work. There are not a lot of tellers present, and they appear to be doing other things on their workstation. So they get up to go to the teller window and help me out with my request, which usually involves them playing around with some archaic bank app on the teller machine and fiddling with the copier for a bit. A supervisor is always around who knows more of the business use cases and always seems to get involved either out of boredom or because they're the only ones who know how to do something.
They are handling in-person transactions, usually deposits (many who deposit checks manually still don't know how to use the app to do so, or if the branch has an ATM that does deposits).
They are the only way to get non-20 cash in many areas; the ATMs that can dispense other bills are quite rare. And if you want $100 in ones you're going inside.
They're basically bank receptionists for old people who will type details into the same system that the general public has access to. They also handle cash for small businesses (I worked in a cafe during university and we'd regularly have to do runs into town to deposit rolls of bills and get more change to float the till)
If that's all you think tellers are then you're missing out on a lot of opportunities.
They are the first line of human-to-human contact with customers. They are able to sell new services or upsell existing services to customers, especially with the customer's data right in front of them. A new pleasant conversation plus "Oh by the way, did you know that you could get service ABC that would help you?" is something that an LLM or ATM can't do reliably.
There's a tremendous amount of opportunity available with well-trained tellers.
I don't feel the phone conclusion is quite correct, because it's not just the need to use an ATM that has dropped. The need to use a banking app or website has also dropped.
The behavior of companies has changed dramatically. Checks have almost vanished, you can often set up automatic payments, and you can get bank balance notification emails/messages. A large portion of banking interactions are fully automated.
In recent years I have been going less and less to banks. 20 years ago I would go monthly to pay some bills.
Nowadays, I must visit a bank once or twice a year tops. My manager frequently sends me messages, but invariably he is trying to sell me something.
I've noticed that branches have really cut down on tellers and in my latest visit the branch didn't even have a teller, just someone helping people use the ATM and lots of desks (most were empty) for you to handle more complicated business with your account manager.
That paired with an increasingly cashless society. (Which is also in large part to smart phones) Otherwise you'd still need more tellers to conduct transactions that exceed ATM limits.
As far as I can tell, it's entirely that. The things the author cites as how mobile banking supplanted going to the bank (paying for things with debit cards, getting your paycheck direct deposited, etc) have nothing to do with mobile banking. They are all just as you said: we live in an increasingly cashless society, the only reason to go to the branch is to deposit or withdraw money, so the need for tellers has gone off a cliff.
Yes, exactly my reaction. Other than maybe to open an account in the first place, the only reason I ever went into to a bank even in the pre-internet, pre-smartphone era was to deal with cash.
Checks could be deposited in the deposit drop, or later at an ATM. My payroll went to direct deposit as soon as that was possible.
But to get cash, before ATMs, you went into the bank, unless you had check-cashing privilges somewhere else (supermarkets used to offer this). To deposit cash, you went into the bank so the teller could count it in front of you and agree on the amount. It was risker to deposit cash in a deposit drop or ATM.
The move to cashless transactions for almost everything, and the resultant rare need to carry cash, is IMO the main reason why we don't need very many bank tellers anymore.
Something that only came with the banking apps was opening of accounts via camera based identification and other security critical stuff, like 2fa for transfers, resetting card pins and setting other security features.
It's also easier to scan payments via app than go to the bank, something that is only possible via native like apps
In which way is the cashless society due to smartphones? Cards did that already before Apple/GooglePay were a thing.
P2P apps (Cash App, Venmo, etc) that have filled the gaps for transactions that were typically tricky to use cards for.
What is a Bank nowadays. It is nothing. It is a virtual construct and software that we are supposed to put our trust into, where banks have a history of betraying that trust.
I didn't notice any link with the iPhone, except maybe a vague coincidence in timing. Online banking existed before the iPhone, it worked using websites, on personal computers. And it took some time before smartphones were taken seriously by banks.
What I noticed however is a noticeable decrease in service quality in bank branches while online (desktop browser) options became better. Banks pushed customers out of their branches progressively. In the early 2010s tellers couldn't do anything you couldn't do online by yourself. For services like dealing with large quantities of cash, or coins, they made it so that you couldn't do more than what the ATMs allowed you to do, limiting the amount of cash the branch had access to and increasing how much you could withdrew from ATMs.
They didn't get the idea to fire all their tellers when Steve Jobs announced the iPhone. It was a decision at least a decade in the making. It is just that people tend to resist change so it happens slowly, especially for big, serious business like banking. And I don't think it is a bad thing.
That's a really good point. They forced the adoption of these services by kneecapping the tellers, in terms of what they had access to.
When ATMs first came out, they were mostly still only at the branch because they were big machines. I remember in the late 70s/early 80s, if you got a steady check (like social security or a paycheck from a steady job) you could cash them at the liquor store. The liquor store would even run my Dad a tab, and he would pay it off when he cashed the check. On paydays he would not be the only one doing that, they must have had to get a lot of cash on hand.
Eh, bank teller jobs were dying and on their way out long before the iPhone showed up. Back in the early 00s local branches were downsizing left and right. My small rural town went from having three banks with like 4 tellers in each bank, in the mid 90s, to one bank with 1-2 tellers, in the mid 00s.
By the end that bank only dealt with mortgages, other loans, and saving accounts.
Online banking and the rise of card use was a huge reason for that. It is almost 20 years since I last time went to a physical bank to withdraw or deposit money, or pay a bill. Probably even longer for paying bills.
Fun story. There are still bank tellers in the Falkland Islands because there is no e-banking. Transfers are literally made by filling in a piece of paper and taking it to the bank.
I am very very glad that most of the world has moved on from this way of doing things. Such a terrible waste of time on a large scale.
Arent these basically minimum wage jobs? I mean throw a few dollars an hour on top of that, but there are plenty of jobs like this.
Any time I needed anything advanced, I get shuffled to someone else.
> Arent these basically minimum wage jobs? I mean throw a few dollars an hour on top of that, but there are plenty of jobs like this.
Getting rid of them isn't a good thing.
Entry-level jobs are important.
The labor zero hyper-efficiency maximalists arenāt going to like this one.
The author wants to say that atms are a stand in for in person banking experience, while the iPhone changes the paradigm entirely.
Why? Seems like basically the same paradigm to me, I can just do it without going anywhere.
Starting with quotes with JD Vance and talking about listening to him on Joe Rogen is... a choice. Also I fail to see how the iPhone did anything or is relevant at all. Banking apps were made by third parties years after the iPhone came out and everybody had dozens of smart phones to choose from. The reason why they mentioned the iPhone specifically, touch screen and app store, already existed in the form of PDAs long before the iPhone came out.
If I have to physically still go to the bank, it really hasn't disrupted much. The iPhone created an opportunity... the banks investing around the technology is the disruption. ATM itself couldn't unlock as much which I suppose is the paradigm mentioned in the article.
AI is more iPhone than ATM IMO.
I hate the graph here. "Bank teller employment has fallen off a cliff" - well it _looks_ that way but actually it's more like halved from its peak because the bottom of the Y axis isn't zero. That's still a significant reduction, but it's not as dramatic as it seems at first glance.
Lies, damn lies...
I was born in the mid-80s and I've never had a bank teller experience. For me growing up, the bank teller was simply the tech support person for my debit card.
This writing style where every section has multiple paragraphs of preamble, prolepsis, cold openers for cold openers, and tangents is infuriating. Get on to the point already.
In general, it's just multiple times as long as it should be.
I guess the trope in movies of masked bank robbers going in and threatening a scared bank teller will be a thing of the past soon. Pointing a gun at an iPhone doesn't have the same vibe.
Blog says: ATM didn't kill jobs. Okay, it did kill some jobs. Proportionally did, but lots of new banks means overall more jobs. (The relationship management stuff is kind of irrelevant, it was simply the banks took the efficiencies to expand, thus still less tellers per branch, but more tellers overall.) /Completely different technology that didn't have the physical space limitations of ATMs/ then caused branches to decline and then the actual teller decline was felt.
Pretty funny how this is being twisted into what feels like AI booster shillery. Smart people are talking about AI as being similar to ATMs (I prefer the analogy of a spelling and grammar checker in a word processor) or other marginal increasers in human productivity/efficiency. They absolutely will increase productivity. They mean less people can do more. But the the roles don't go away completely because they have clear technological limitations. They spout probably likely text, and straight up lie, and you can't trust 'em. That's a limitation in what they are just like an ATM needs to be in a big metal box and they only dispense cash.
AI can't do the automated firm linked to (to be fair, didn't read that linked substack, as it looked as ridiculous as that other sci-fi fanfic by Citroni Research or whatever it was). Not AI as it is now known, namely an LLM chatbot. /A completely different technology/ might. A technology that might be informed by AI. Sure. Just like I'm sure mobile banking was informed by the technology in ATMs. But we're not calling smartphones with mobile banking apps "mobile ATMs". Because if we were, then you could get away with it. And the future technology that could remove "labor shaped holes" (or however the author phrased it) could be twisted into an AI nomenclature. Just like Machine Learning (ML) got twisted into AI nomenclature. But the iPhone probably didn't need the ATM to come first. It needed things the ATM uses. The next thing could very well use ML. But not enough to be called "AI" except to boosters shills.
Overall, this sounds like the usual AI boosterism that Ed Zitron complains about often. And I agree with his critiques. This article says nothing about how a /new/ technology needs to come about from AI. If it did, it would also have to comment on whether we need to spend insane amounts on data centers and circular deals to get to it. Because my guess is the answer is, no, it takes R&D and a truthful "we don't know what it looks like yet and we can't promise you shareholders when it will come" to get to it.
Ironically the author says the ATM story was used to come up with two incorrect interpretations, and then provides what I feel like was another. Still interesting, if possibly irresponsible in how it frames AI as iPhone--and not the ATM it still feels like. [EDIT: a word.]
I really enjoyed this article, I didn't bridge the idea of an ATM and mobile banking.
I think the idea raised about "Automated Firms" is a bit off in the picture painted in that linked article. I think the David Oks intention is to paint a picture of a fully automated company, but the linked article gives this impression:
> Future AI firms wonāt be constrained by what's scarce or abundant in human skill distributions ā they can optimize for whatever abilities are most valuable. Want Jeff Dean-level engineering talent? Cool: once youāve got one, the marginal copy costs pennies. Need a thousand world-class researchers? Just spin them up. The limiting factor isn't finding or training rare talent ā it's just compute.
In that above paragraph the author is saying to the reader that a human will be able to spin up and get these armies of intelligent workers, but at the end of the day their output is given to a human who presumably needs to take ownership of the result. Intelligent workers make bad choices or bad bets, but those AI machines cannot "own" an outcome. The responsibility must fall on a person.
To this end, I think the fully autonomous firm is kind of a fallacy. There needs to be someone who can be sued if anything goes wrong. You're not suing the AI.
That is why a fully automated firm would be a paradigm shift. Instead of requiring someone to be responsible and to QA things, you just let AI systems be responsible internally, and the company responsible as a whole for legal concerns.
This idea of an automated firm relies on the premise that AI will become more capable and reliable than people.
In this regard, the company cannot be created where there is not a single person tied to it, at least legally, even shell corporations have a person on the record as being responsible. So there needs to be some human that is apart of it, and in any "normal" organization if there is a person tied to the outcome of the company they presumably care about it and if the AI 99.99% of the time does good work, but still can make mistakes, a person will still be checking off on all its work. Which leads to a system of people reviewing and signing off on work, not exactly a fully autonomous firm.
The benchmark is AI making less mistakes than humans, not making no mistakes. Just like autonomous vehicles.
And yes, presumably there would be a person who set the firm up, or else our legal system would need to change quite fundamentally.
Also, employing āinfinite intelligenceā by splitting it into āworkersā and organizing them into firms cannot be farther than a paradigm change.
Itās strictly an attempt to shoehorn the new tech into an existing paradigm, just because right now the system prompt makes an āagentā behave differently than the one with a different prompt.
Itās unimaginative to say the least.
Yeah, I think if there is some sort of super intelligence, the idea would be that it would make the system of computers and computation irrelevant entirely. Now that would be novel.
Correlation is not causation.
There is no clear link to the iPhone causing lower teller employment.
This article does have a glaring omission: The 2008 financial crisis effects on the banking industry in general. When there are fewer local banks there are naturally fewer tellers employed. Bank failures peaked in 2010 in the aftershocks of the crises, which lines up nicely with the articles timeline.
yeah weird. Same goes for the "ATMs increased demand for tellers" strange idea suggested earlier in the article, which was automatically disproven right there by actually attributing the growth in tellers to deregulation. Which one is it?
This seems like a fluff piece. The tl;dr is that mobile banking (not the "iPhone") is what "killed" bank teller jobs. You can add online banking, credit cards, debit cards, and all other cashless payment options to that too.
There is also a premium for the human touch. I currently pay $15 fee to my bank a month. Going rate here for a bank account is $0.
But the $15 bank has a call center that is dreamy - reliably connected to a competent focused individual in under 3 seconds.
It doesn't matter how good the tech & automation is I place an economic value on that ability to pick up the phone and talk to a human. LLMs are crushing it but I'm not fuckin paying $15 for an LLM.
I didn't see the article mentioning how banks forced people to use ATMs or apps instead of tellers by having "green" accounts. where you would get a monthly account fee waved if you didn't go in to a branch.
Right around when my local credit union began requiring (IMHO insecure) 2FA, I coincidentally moved right next door to a branch location.
Since I refuse to implement their "security" "feature," I just walk into their office every time I need a simple balance inquiry/transfer. They probably hate that I have just enough money deposited to consider my inconveniencing them profitable.
Worth the $1.00 monthly "in-person banking fee"
Everyone I knew working as a bank teller quit because the actual job is screwing over old people with bad performing and long lasting investments. My bank calls me at least once a year to tell me my personal bank teller changed again.
A personal banker and a bank teller are not the same thing. I think you're conflating or confusing two different professions.
the line is being blurred as the need for tellers goes down many banks have the tellers performing personal banking adjacent tasks, like selling products, accounts or other upsells to existing customers
Everyone I knew working as a bank teller quit because the actual job is screwing over old people with bad performing and long lasting investments.
Thatās not a bank tellerās job, at least not in the U. S. Youāre confusing that job with something else.
Bad performing and long lasting you say?
If you are implying that the two are contradictory, allow me to introduce you to annuities.
Based on the fact that we've had ATMs since the 1970s and bank tellers didn't fall away until the 2000s, the correlation isn't there regardless of the causation.
The interesting takeaway is that automation rarely removes jobs inside the existing paradigm. ATMs automated a task inside branch banking, so banks just reorganised labour around it. Smartphones removed the need for the branch entirely.
I mean, there is definitely a turndown period in labour force when a new tech is introduced, but it will defintely produce more jobs tho, as an evolution of human history. <3
Many banks wanted their branches to become like Apple stores where it's self serve even though that's not what an Apple store is.
Uhhh... if it's 'mobile banking' that killed teller jobs, what does the iPhone have to do with anything other than clickbait? (I guess I answered my own question)
For better or worse, the iPhone kickstarted the mobile revolution.
This must be an amerilard phenomenon. Thereās no way the number of bank tellers has remained constant in the western world. I havenāt been to a bank branch in 10 years.
The graph showing that "Bank teller employment has fallen off a cliff" is not zero based. This is pretty damn bad. The graph looks like it's going down 90%, but it's actually going from 350k to 150k. That's a ~60% drop which is a lot, but not "falling off a cliff".
60% is pretty well in āfalling off a cliffā territory. The graph is misleading but that phrase, to me, is not.
60% job loss is not off a cliff?
That huge job loss also means no hiring. If you were a bank teller you would seriously need to consider a job switch
Probably a bigger sign to look for would be average age of bank tellers vs other occupations. If it's trending higher, then it's likely just people who've been doing the job for a long time and serving other older customers. I have a feeling not many young people are becoming tellers or even needing their services, but I can't verify it.
> an AI system is literally a machine that can think and do things itself
why do so many writers claim this as a matter of fact? are we losing (or did we never have) a shared definition of the word "think"? can an LLM, at this time, function with zero human input whatsoever?
edit to add: these are genuine questions, not meant to be rhetorical :)
it's hard for me to gauge a broader understanding of AI/LLMs since most of the conversations i experience around them are here, or in negative contexts with people i know. and i'll admit i'm one of those negative people, but my general aversion to AI mostly has to do with my own anxiety around my mental health and cognitive ability in a use-it-or-lose-it sense, along with a disdain for its use in traditionally-creative fields.
>are we losing (or did we never have) a shared definition of the word "think"
People have been saying, āthe computer is thinking,ā while webpages are loading or software is running for as long as Iāve been consciously aware. I agree thereās something new about describing AI as, āliterally a machine that can think,ā but language has always had fuzzy borders
It's wild to watch documentaries from the 1980s where a primitive computer is said to be "a thinking machine" that is "taking most of the work out of a job".
yeah, for sure. i really think some people are under the impression that LLMs are a form of general AI that actually processes thought rather than being an admittedly-impressive exponential autocomplete.
though i'm not by any means an AI booster, my question wasn't really meant to be taken as a gotcha - more a general taking stock of where we're at in terms of broader understanding of these technologies outside of the professional AI/hobbyist world.
Not sure itās great to start this with jd Vanceā¦