I returned to AWS and was reminded why I left

(fourlightyears.blogspot.com)

782 points | by andrewstuart 2 days ago

548 comments

  • AJRF an hour ago

    AWS / GCP / Azure aren't for individuals or small businesses. They won't tell you this anywhere, and they won't stop you from signing up - but they simply do not care one iota about users with anything less than $100k billing per month.

    They treat big account owners like kings: they fly them out to Formula 1 events, they put on 3-day workshops in swanky retreats, because a few k spent on this equals maybe millions of dollars.

    If they respond to a small business quicker, they don't get anything from it. They collect a bill that, if it went missing, they wouldn't notice.

    I am not saying this is right - but people running small businesses on these platforms are operating under false pretenses.

    • throwaway2037 35 minutes ago

      All good points. What do you recommend for the "small potatoes"?

      • AJRF 8 minutes ago

        Hetzner. You won't get much better support, and you will have to deal with EU legislative nonsense, but it won't be hard to do that if you are actually small.

        Why do I say Hetzner?

        It's very budget friendly and has a very small list of services, so it's much harder to screw yourself into a situation where it's hard to leave.

  • tedivm a day ago

    > AWS stomped on open source projects - despite the clear desire of projects like Elasticsearch, Redis, and MongoDB not to be cloned and monetized, AWS pushed ahead with OpenSearch, Valkey, and DocumentDB anyway, capturing the hosted-service money after those communities and companies had built the markets; the result was a wave of defensive licenses like SSPL, Elastic License, RSAL, and other source-available models designed less to stop ordinary users than to stop AWS from stripping open-source infrastructure for parts, owning the customer relationship.

    This is completely backwards, at least with OpenSearch and Valkey. AWS didn't create the forks until after the upstream projects changed their license, so it's really weird to say that the forks "resulted" in the license changes when those forks were a response to the license changes. With Valkey in particular, it was members of the former Redis core development team that created Valkey.

    • hankerapp a day ago

      A lot of these projects work on a business model where they open-source their core product, and provide advanced services, installation, maintenance or fully-managed services around their product. AWS was bypassing them by providing fully-managed services. On this, I am on the side of the people behind the projects. Basically AWS was eating their lunch. They had no choice but to change the licenses.

      • rpdillon a day ago

        They have a problem with their business model, then. License changes to a formerly open source project are costly. The community reacts very strongly when license terms change after they've come to depend on a product, and they should.

        Why do we apply this standard to MongoDB but not to Apache, Linux, Postgres, or MariaDB? One purpose of an open source license is to allow many providers to provide the service. As I've talked about here previously, Elasticsearch wasn't able to provide the service I needed, so I had to move to AWS.

        It's weird to me that the Hacker News community doesn't think that sort of competition is good. The narrative seems to be that all these businesses are somehow victims of AWS, when it seems the truth is much more straightforward: they provided open source software and people used it. The fact that their business had no working plan to actually monetize that foundation should not be taken out on the community.

        • maest 8 hours ago

          > It's weird to me that the Hacker News community doesn't think that sort of competition is good.

          Negative externalities. The company makes money using a free resource and disincentivises future development.

          I'm sure you can see why killing the most popular business model for open source companies is bad for the ecosystem, right?

          • alexey-salmin 7 hours ago

            I can't? I mean, if Amazon does a commercial version of Elastic better than Elastic themselves, then so be it. I don't see how one company is entitled to turn an open source project into a business and the other is not.

            I do see issues with monopolies pushing inferior products onto users. But that would be a completely different issue, nothing to do with open source.

            • gcr 22 minutes ago

              Don’t you see how bankrupting the Elastic devs pushes an inferior product onto users?

            • dns_snek 5 hours ago

              > I don't see how one company is entitled to turn an open source project into business and the other is not.

              According to the original license they are both entitled to do that, that's the problem. Do you think it's sustainable for one company to make the software for free and another one to sell it for profit?

              • alexey-salmin 2 hours ago

                > According to the original license they are both entitled to do that, that's the problem.

                I really don't see how Amazon is to blame for this problem, they weren't the ones who picked the license.

                > Do you think it's sustainable for one company to make the software for free and another one to sell it for profit?

                They both sell it for profit, let the most profitable one win.

                • gcr 20 minutes ago

                  They both sell it for profit, but Amazon doesn’t contribute changes upstream, so the public + rest of the industry won’t benefit from their work. It’s not an equivalence.

              • Orygin 2 hours ago

                Why isn't this a problem for other databases then? I'm sure most clouds sell some MariaDB services. Why shouldn't they be able to profit from it?

                It's because the business model for ES is direct competition with AWS and others, and they got outcompeted. So they had to play license games to try and level the field.

            • JambalayaJimbo 5 hours ago

              I mean it’s a free country either way then. Elastic can change the licensing and Amazon is then free to compete with a fork of the software pre-licensing change.

              Amazon doesn’t really have a leg to stand on in objection here. Building a platform to re-sell an open source project may end up fracturing that open source community’s user base; that’s a consequence of their own actions.

        • lmm 10 hours ago

          Selling support/services as the maintainer of an open-source service was never a hard-nosed business proposition in the first place. It's like Amazon undercutting your fire station's bake sale.

          • jychang 9 hours ago

            Yeah, I'm genuinely concerned that members of society can't seem to understand this.

            More and more people are just focused on making a quick buck.

            I'm getting a feeling that these people would gladly rip off a lemonade stand, and then defend themselves by saying the lemonade stand deserves it.

          • pentacent_hq 6 hours ago

            This is such a good analogy, thank you!

        • ipaddr a day ago

          Competition would mean Amazon creating their own software. Taking software others made and using your monopoly ecosystem and scale to drive the original creator out of the game kills the product.

          Many support breaking up Amazon so others could compete, not killing small entities and growing Amazon.

          • skinfaxi a day ago

            > Taking software others made and using your monopoly eco-system and scale to drive the original creator out of the game kills the product

            They took software that others gave away for free without restriction and did what they wanted with it. It took time but the community figured out this exploit path and patched it in subsequent license versions.

            • wvh 2 hours ago

              One could argue it was not given away for free, but with a silent expectation of reciprocity. Using open source is a gentleman's agreement to be respectful towards the project, to be a good citizen, not to abuse it, and potentially to contribute.

              But you're right, communities are now having to concoct a wild-growing collection of semi-open-source licenses to protect themselves from abuse by a few big players.

            • mschild 2 hours ago

              From a legal standpoint, you're correct.

              From a moral/ethical one, it's still shit.

              You're legally allowed to do a whole lot of things. You can still be called an asshole for doing them.

          • jeremyjh 19 hours ago

            They knew what they were doing. They released OSS to build traction and a community. In some cases, the community contributed quite a lot to the quality of the software - even if not a lot of code. It never would have gained any traction or interest from enterprise buyers without that. Then that valuable software they had already given away was used to build a business that couldn’t create enough value on top of it.

            The only people with any justification for hurt feelings are the community contributors.

            • tedivm 18 hours ago

              AWS literally paid for developers for the redis project, including the salary of core members. It's not like they didn't contribute back to the community.

              • jeremyjh 18 hours ago

                They pay for a lot more open source work than that as well, but they also don't get to make any special claims for doing that. None of it is charity - it is simply in the collective interest of a lot of tech companies to commoditize and share the costs of infrastructure software. Even shaming freeloaders is uncalled for and against the ethos of OSS - and that shaming is sort of implied in your statement.

          • rpdillon a day ago

            It's not just Amazon, it's also smaller providers like Dreamhost, which I've been using for 20 years. I feel like people are in favor of killing the hosting ecosystem so that we can support businesses that didn't have a working plan to monetize their open source offering.

          • ThrowawayR2 21 hours ago

            That's a risk they knowingly chose to accept when they opted for FOSS licensing. It's not as if people hadn't asked "Well, what if another party tries to fork our open source code for profit?" all the way back when FOSS was starting to gain traction in the 1990s.

            • pessimizer 20 hours ago

              OSS licensing.

              Free Software was designed to avoid this, and has become stricter as the technology changed. Open Source was deliberately designed to thwart those protections: the entire intention of it was to allow businesses to resell work that was done for free. When you fork Free Software, your fork is also Free Software.

              • pabs3 9 hours ago

                Free Software licenses don't restrict profit making, even the AGPL wouldn't stop Amazon from using the same strategy to beat those OSS companies in the market.

                • disgruntledphd2 4 hours ago

                  Yes, but at the very least, Amazon would need to contribute their code back, so it's not a complete loss.

                  • pabs3 2 hours ago

                    That is incorrect; the FSF licenses would require Amazon to contribute code forward to their users, not back to the project.

                    Also, Amazon was already contributing code back when these companies changed their licenses; the companies don't care about code contributions, just money.

          • kikimora 19 hours ago

            The original creator's business model relied on extracting free labor from the community. It backfired, and they changed the license. They abused contributors by betraying their trust and changing the license after AWS abused their business model. No good guys here.

          • andrepd 20 hours ago

            There's a lesson there then, isn't there? Use GPL

            • jlokier 19 hours ago

              The GPL has no effect on this issue. For service providers like AWS, who provide the service not the software, the GPL doesn't require them to do anything differently than with more permissive licenses.

              • somenameforme 3 hours ago

                ++

                I think the GPL has become somewhat obsolete because of this, causing it to create completely nonsensical scenarios. For instance, I can't comply with the GPL and add vanilla Stockfish (the currently strongest chess engine, licensed under the GPL) to a chess app released on the Apple App Store, yet somebody can slightly modify the engine, keep all those modifications proprietary, and sell access to the engine on the same App Store, without source access, so long as the computation is done through a middle-man server instead of being done locally.

                The GPL no longer suffices to maintain the spirit and intent of the GPL. Like a peer comment mentioned, it seems the AGPL is their update to resolve this.

              • andrepd an hour ago

                AGPL, it is implied.

            • temp8830 19 hours ago

              *AGPL

        • sandeepkd 7 hours ago

          There are passive open source projects, done by people out of love in their spare time over the years, and then there are active open source projects, done by people with the idea of executing in the open space and building a community around it. The latter has business incentives tied around it, and I guess the challenge is that there isn't a clear structure, which leads to this situation.

        • bluegatty 8 hours ago

          "It's weird to me that the Hacker News community doesn't think that sort of competition is good."

          It's not 'competition'.

          It's carnivorous, predatory.

          Consider shifting gears and seeing all of this through the lens of 'power'.

          There is no such thing as open/free markets when there is massive power asymmetry.

          Anything that a weaker entity produces, will be 'taken' by a more powerful entity via all sorts of mechanisms.

          The 'point' of IP/Open Source licensing can be whatever anyone wants it to be ...

          but consider this: if the 'game' is on a tilted field, then almost all of the economic value goes into the hands of those with the power to reap the surplus - not the creator.

          The 'owner' is who has power.

          The Kings didn't rule by arbitrary decree - their money came from owning all the land. It doesn't matter how hard you work, how hard you innovate, how much surplus you create - if the landlord says 'I want all of that' and you have no choice.

          Your Rent = All The Value of the Stuff You Create with a bit leftover for you to survive.

          That is entirely done through legal ownership - not through some kind of forceful coercion.

          Control of distribution, access to financing, entrenched supplier / buyer relationships, barriers to entry, regulatory capture, economies of scale - all of that makes some systems unassailable without some degree of power.

          Purely through the lens of power - Open Source is like 'commoditizing' a tiny little part of the system, where the surpluses will get pulled in by the most powerful entity.

          In this case: Amazon.

          Anyone writing software and 'making it free' - that Amazon can use - is working for Amazon for free.

          Again: if you want to see it that way.

          If you just like 'making stuff' that's perfectly fine as well.

          But - the moment you see this as a 'means to income' - then - it's a 'power dynamic'.

          This is why better/smarter IP laws should help smaller players.

          The whole point of these things is to try to enable actual competition - which is not 'feed David to Goliath' - its supposed to give David a chance.

          The 'changing of license terms' by some small vendors is the result of Amazon suffocating them - it's the power system finding its 'equilibrium' - where the 'creators' are snuffed out - or, 'better yet for Amazon', keep working for free.

          • ivell 5 hours ago

            And for society as a whole, we are getting to a state where corporations have incredibly large amounts of money and, gradually, hard power too. OSS is a kind of small rebellion that we need to sustain so that we don't lose that tiny bit of freedom we have.

            P.S. I think the East India Company's history should be a mandatory lesson for everyone on the ability of a single company to take over a subcontinent. At its peak it had its own army, ruthless efficiency due to a largely meritocratic structure, and was successful in taking over multiple kingdoms.

        • hunterpayne 13 hours ago

          "They have a problem with their business model, then"

          Ok, then don't be surprised when the most popular license becomes the Fair Source license. Under this license, you have no rights: no ability to fork, no ability to modify, no ability to legally change the software in any way, but hey... you can see the source, right? I feel like you don't understand the tragedy of the commons somehow.

          • bornfreddy 7 hours ago

            That's a huge misrepresentation of fair source licenses. They prevent competing with the original vendor, but still try to retain Right to Repair as much as possible, for example:

            > The Fair Core License, or FCL, is a mostly-permissive non-compete Fair Source license that eventually transitions to Open Source after 2 years.

        • swasheck 21 hours ago

          agreed. i’m no aws apologist but if you’re going to try to monetize open source and then complain when someone else does it more efficiently/effectively, it really feels disingenuous. “we were going to do that, but they got there first. it’s not fair.”

          i’m only familiar with the postgres side, but it seems like a more nuanced view of this debate would be to discuss aws monetizing open source relative to their upstream, community-beneficial contributions.

          • dijit 21 hours ago

            Honestly, this is so divorced from reality that I'm curious if you've ever actually spoken to a CFO before.

            • swasheck 21 hours ago

              please educate instead of insult. happy to hear your response. that is why we’re here, after all.

              • dijit 20 hours ago

                Sure. CFOs optimise for fewer vendor relationships; fewer invoices, fewer things to talk about during compliance, less reconciliation overhead. Consolidated spend also improves their negotiating position. So when AWS offers good-enough Elasticsearch bundled into an existing relationship, it wins regardless of whether the original is better supported or better value.

                "More efficiently" means procurement efficiency, not operational efficiency. They're not the same thing.

                • tikkabhuna 2 minutes ago

                  As someone who has had to deal with vendor management at a financial services company, I couldn't agree more.

                  We were going through a process to make vendor management more standardised and it reached a point where we couldn't even consider adding new vendors.

                  Adding new services to an existing vendor had minimal paperwork and approvals. As long as you had budget for it, you were unlikely to get any pushback.

                  New vendors required tons of back and forth with legal. Infosec reviews. Additional costboards. Having to justify the vendor to multiple groups. Working out how you get them onboarded into the finance system. Once they're onboarded, we would then have additional paperwork to do periodic reviews to rate the vendor and make sure they're not a critical dependency that will bite us in the ass.

                  I've only worked with AWS and GCP, but they also throw training and credits at us, too. This could be personalised 2-day classroom events just for our company. There's a huge amount of perceived value for funnelling money through a cloud provider.

                • swasheck 18 hours ago

                  thank you. really appreciate that insight.

        • cyanydeez a day ago

          Walmart pulling up to a small town, opening a single business, and paying everyone minimum wage is not 'competition is good'.

          Just try a little bit of understanding.

          • rpdillon a day ago

            This feels close to "felony contempt of business model".

            https://www.eff.org/deeplinks/2019/06/felony-contempt-busine...

            We are supportive of 3rd party ink cartridges, and there's little concern for the business model of the printer manufacturers. We instead care about the rights of the folks using the printers.

            With Postgres, no one bats an eye that there are thousands of hosting companies providing Postgres as an offering, and they give nothing back to the project. Same with Apache, Nextcloud, Linux, Nginx, Sqlite, and thousands of other pieces of open-source software. Are folks against hosting companies like https://yunohost.org/?

            It's only when (1) the software is open-source, and (2) the entity behind it doesn't know how to sustain itself with open-source, that we suddenly change positions and view the project as a victim. This doesn't happen with printers, it doesn't happen with other open source software. I'm not even against a change in the license, but claiming that AWS is evil for doing this doesn't track.

            • gobdovan 21 hours ago

              A lot of those projects are not companies selling software. They're effectively public infrastructure projects, often governed by non-profit foundations or community institutions.

              Also, many of them predate hyperscalers and developed governance/economic structures that make them harder for AWS to capture or destabilize, whereas AWS free-riding a vendor-controlled project can destroy the economic engine sustaining the project itself.

              Quite ironically, the only example from your list that doesn't predate hyperscalers (Nextcloud) is fundamentally a self-hosting/federation product. It exists largely as an alternative to hyperscaler-native platforms, not as a cloud primitive AWS can easily commoditise into its own stack.

              So, treating PostgreSQL, Linux, Elasticsearch and Nextcloud as interchangeable "open source projects" ignores the completely different institutional and economic realities behind the projects.

              • rpdillon 21 hours ago

                Indeed! I just don't think it's on Amazon to fix those institutional and economic realities when they decide to host a project that people find useful.

                • hn_go_brrrrr 7 hours ago

                  It's on Amazon to consider the second-order effects of their actions. They may in some cases be killing the golden goose.

            • Dylan16807 13 hours ago

              If printers were free, and ink was free or open, and the printer company said "don't operate a printer leasing business, that's the only thing you can't do", I would side with the printer company.

          • tonyedgecombe a day ago

            Maybe it is for the consumer. When Aldi opened in my nearest town my food bill dropped by 20%.

            • gobdovan 21 hours ago

              That's the desired outcome of competition but the effects can go all over the place and the second-order effects in fragile towns can matter more than the price drop. As an extreme example, some people may lose their jobs, local spending may fall, some small shops may close and Aldi may pull out too, so everybody loses (here's [0] as an approximate example).

              Usually a community can tolerate changes only when it's not already near the bottom. When you're near the bottom, almost any destabilisation can kill your little system.

              [0] https://www.fox32chicago.com/news/aldi-closes-west-pullman-c...

            • cyanydeez 19 hours ago

              Aldi is a great example of socially disciplined capitalism.

              • throwaway2037 26 minutes ago

                Can you explain your intent behind "socially" in your comment? I don't understand it.

          • surajrmal a day ago

            Arguably the town is at fault for choosing to permit Walmart to open in their town, in that analogy. If you want to control the negative externalities of capitalism, you can't just provide no regulations and hope things will work out.

            Even if it weren't AWS, someone else with enough determination could use the same open source code to create a compelling alternative, taking away business from the original authors. Trying to use social norms to make people not do that is not effective. You need mechanisms that can be enforced via legal procedures to be effective.

            • cyanydeez 19 hours ago

              the grift economy is demonstrating that throwing money is all you need to do to get a permit.

      • mpyne 21 hours ago

        > They had no choice but to change the licenses.

        Then why did they advertise themselves as open-source efforts when they weren't? They should have been the best possible providers of managed service offerings given they wrote the software they'd be managing, no?

        Why are monopolies OK here but not elsewhere? Choosing a hard-to-win business model is not supposed to be a choice that guarantees you business income.

      • skywhopper a day ago

        Just because they picked a bad business model doesn’t mean they deserve to avoid competition. Don’t give away your source code if you don’t want someone else to provide hosting.

        • harrall 21 hours ago

          You’re not wrong but if people didn’t, all our companies would be using Oracle and Microsoft SQL Server and paying Larry Ellison instead today.

          • mpyne 20 hours ago

            PostgreSQL is doing fine even with AWS having a multitude of hosted offerings.

            Maybe the business model / community-governance model does matter after all...

            • petepete 20 hours ago

              PostgreSQL doesn't have a pro/officially supported version though.

              • mpyne 13 hours ago

                Sure they do.

                AWS RDS, Azure Database for PostgreSQL, etc. are all "pro" / "officially supported" deployments of PostgreSQL.

                On top of that the PostgreSQL official website even lists a whole table full of vendors from whom you can get commercial support at https://www.postgresql.org/support/professional_support/nort...

                Bringing faux open source into the world isn't a justification for adopting an infeasible business model and then complaining that your business doesn't compete very well.

                • petepete 3 hours ago

                  Obviously I mean PostgreSQL themselves don't have one, like Redis, MongoDB etc.

                • hunterpayne 11 hours ago

                  Enjoy all new OpenSource projects being open in name only then.

                  • mpyne 8 hours ago

                    I would argue that was precisely the issue with Redis and its friends. As a rule they want to get credit for being an open source project and contributing to the global commons, but without actually contributing to the global commons.

                    I'm not going to knock people for charging money to write proprietary software. If that's how you want to approach business dynamics as a software author, then by all means.

                    But trying to make money by extracting rent through a proprietary hold on your "open source" property, even as you claim to be open source, is too cute by half. Which one is it? The OSI definition hasn't substantially changed since the 90s, it's not like people can act surprised by what counts as open source.

                    There are ways to try to make money from open source, but they often involve leaning into the commons aspect and then offering a proprietary license as a relief valve for organizations not ready to have to pitch in, but who would be willing to offer up money instead.

                    Absent that, if you're literally going to be outcompeted on a business perspective on software you wrote, I can scarcely imagine what to tell you.

        • MSFT_Edging 18 hours ago

          Don't take advantage of open source projects if you don't want to be targeted in their licensing.

      • colechristensen 11 hours ago

        A lot of these projects started as community-driven and funded open source efforts whose creators eventually decided to create a professional services company as a sponsor -> that company takes over nearly all of the development -> they relicense when they realize the funding they raised isn't going to be paid back.

        They're all just rug pulls when the creators want to get rich off of their open product and realize they can't after raising tens of millions.

        I'd have a lot more sympathy if the story wasn't "closing an open project so we can pay investors"

    • ceejayoz a day ago

      > it's really weird to say that the forks "resulted" in the license changes when those forks where a response to the license changes

      But those license changes were a response to how AWS was monetizing their work in ways unsustainable for the upstream projects.

      • embedding-shape a day ago

        > But those license changes were a response to how AWS was monetizing their work in ways unsustainable for the upstream projects

        Or, seen from the other side, these projects chose initial licenses that didn't fit with how they wanted others to use their project.

        If you use a license that gives people the freedom to host your project as a service and make money that way, without paying you, and your goal was to make money that specific way, it kind of feels like you chose the wrong license here.

        What was unsustainable (considering this perspective) was less that outside actors did what they were allowed to do, and more that they chose a license that was incompatible with their actual goals.

        • ceejayoz a day ago

          The situation changed. A license that's the right choice at one point may not be the right license a decade later.

          • ncruces a day ago

            That's fair, but forking the FOSS version is also an adequate response.

            • ceejayoz 21 hours ago

              Yes. But so would financially contributing to the folks who did the work.

              • tedivm 18 hours ago

                AWS literally did that. They paid for full time developers to contribute back to the redis code base, including core redis developers. If you actually look at the redis code base the majority of it was written by people who never worked for redis.

                • cylemons 10 hours ago

                  > If you actually look at the redis code base the majority of it was written by people who never worked for redis.

                  That's a really big deal. How did they legally manage to do the license change? I was under the impression that only works if the original owner is the one doing most of the work.

                  • sakjur 5 hours ago

                    Permissive licenses don't protect against projects that decide to change the license when releasing a new version.

                    Copyleft protects against that as a general rule. However some projects that rely on copyleft require contributors to sign license agreements granting the project owners a more permissive license.

                  • jen20 2 hours ago

                    > That's a really big deal. How did they legally manage the license change? I was under the impression that only works if the original owner is doing most of the work.

                    Almost all of these license changes just change the terms under which _new_ work is contributed - which is why many of them have forks from the last OSI-licensed commit.

              • ncruces 18 hours ago

                Sure.

                Since they're a for profit entity, they'll do whatever they think offers the best cost/benefit.

              • chii 21 hours ago

                If those folks wanted money for their work, they should be charging a price for it.

                • ceejayoz 20 hours ago

                  That’s what they eventually did, yes.

                  But it’s ok to be voluntarily grateful for hard work.

                  • sumedh 13 hours ago

                    > But it’s ok to be voluntarily grateful for hard work.

                    You don't become a billionaire using that approach though.

          • jabwd 12 hours ago

            I hate Amazon and monopolies, but I hate companies that think opensourcing their code as a marketing stunt gives them more rights or whatever. If you don't want to opensource, then don't?!

            • sincerely 10 hours ago

              I can’t agree more, this “our software is open source but we have unwritten rules about how you can use it or we’ll attempt to shame you” attitude is absurd

          • embedding-shape a day ago

            Agree, as long as existing contributors agree the license should be changed, projects should feel free to do so, no harm, no foul.

        • tonyedgecombe a day ago

          I’m not sure any open source license is going to help when you can ask Claude to clone an application in the language of your choice.

          • pessimizer 20 hours ago

            If Claude looks at the code when it does it, then you can still sue them. I don't think there's a "Claude Clean Room" product that trains on everything except the code you might be accused of copying.

            I can't just translate Harry Potter to Spanish and sell it.

      • jgalt212 a day ago

        Yes, this was my impression as well.

    • 2ndorderthought a day ago

      Sometimes I wonder how much it would hurt Amazon to pay the creators and maintainers of the OSS software they sell 1 cent per billing period of use (1 hr?). I also wonder how much money that would offer an OSS team to contribute, risk-free, to improving the product.

      • richwater a day ago

        I think you would be surprised how many commits in OSS comes from paid workers of the various cloud companies and tech companies out there.

        • ninjagoo 16 hours ago

          > I think you would be surprised how many commits in OSS comes from paid workers of the various cloud companies and tech companies out there.

          And I think the value that various cloud companies and tech companies derive from open source by far exceeds their contributions to it. When you add in the economic contribution, those OSS value-adds are an order of magnitude higher.

          According to this Harvard paper [1], the cost to create widely used open-source software once is about $4 billion. The replacement value to firms that use OSS, if they had to build or buy the equivalents themselves, is about $8.8 trillion. The software-spending effect (how much firms would need to spend on software without OSS) is 3.5x.

          According to this EU study [2], EU companies invested about €1B in OSS in 2018, while the impact on the European economy was estimated at €65B–€95B.

          [1] https://www.hbs.edu/faculty/Pages/item.aspx?num=65230 [2] https://opencommons.org/images/c/c1/The_impact_of_open_sourc...

          • troad 10 hours ago

            > And I think the value that various cloud companies and tech companies derive from open source by far exceeds their contributions to it.

            Isn't that how literally all economic exchange works? Why do you think your boss pays your salary?

            If the argument is that Amazon should invest 110% of their OSS-derived profits back into OSS, then OSS ceases to have any value to them. They would simply write their own closed-source software, which would be trivial for a company of Amazon's size, and we'd all be poorer off for not having OSS. Getting one percent of someone's profit is better than getting zero percent.

          • firesteelrain 13 hours ago

            Understand the argument. No one is forcing anyone to make OSS. They do it because they want to. It's like they are their own worst enemy.

            • hunterpayne 11 hours ago

              "It's like they are their own worst enemy"

              No, you are your own worst enemy. Because of your attitude, OSS is going to go away, as will all those economic benefits you are enjoying. But keep up with that "it's OK to pee in the pool" ethics of yours. Let's see where it gets you in the long run.

              • firesteelrain 9 hours ago

                Ok -

                So you are annoyed I am using something for free and per the license that the authors set themselves and wanted no compensation. Got it.

    • hedora 19 hours ago

      First amazon was abusive. They abused their monopoly position to gain market dominance over upstream and didn’t contribute back monetarily or with code.

      Next, upstream responded with a license change, then amazon escalated with the fork.

    • tcp_handshaker 21 hours ago

      I lost my sympathy for many open source projects' philosophy the first time I sent a patch to Redis: one of the committers took it as their own, never replied to my messages, and committed it under their own name. They deserve Valkey.

      And I still remember JBoss and ahole Marc Fleury ...

    • silverwind 18 hours ago

      All those forks turned out to be inferior projects with substantially fewer contributions than the originals.

    • paulddraper 19 hours ago

      You’re reading “cloned and monetized” as “forked.”

      But in context, it means “cloned/downloaded and offered as a hosted service.”

      The fork came later, after the defensive license, which was in response to the clone+monetized hosting, eg ElasticSearch.

    • stavros a day ago

      Of course AWS didn't create the forks until the projects changed their license to disallow AWS from making money from their code! That's the whole point here.

      • jasonlotito a day ago

        When they changed their license, they were no longer open source. They could have chosen an open source license such as the AGPL, but they did not. They were a non-open-source company at that point, and AWS was putting out a product built on open source. Simple as that.

        Redis was not an open source company when AWS moved to Valkey.

        Companies are free to license under the AGPL if they want. Or other open source licenses.

        Sorry, but non-open source companies aren't getting sympathy from me because they are hating on open source projects.

        • stavros a day ago

          These were open source projects that had to change licenses away from open source because of AWS. I'm not sure how the OSS companies are the bad guy here.

          • sokoloff 21 hours ago

            I think there's plenty of room for people to object to the "had to change licenses" framing. They chose to change licenses, same as they chose the original license.

            That original license probably helped them with goodwill and to gain a community; when those benefits no longer exceeded the downsides of using that license, they changed licenses to one that suited them better.

            Naturally, this change costs them some amount of goodwill, a portion of the very goodwill that they harvested by choosing an open-source license in the first place.

            • stavros 21 hours ago

              I don't see this as an issue with the company. They were happy to release their code as OSS, as long as that allowed them to make enough money to develop the software. It was a win/win, and then AWS came and took advantage of that.

              If you leave some apples at the side of the road, with a sign "$1 per apple" or whatever, and people largely pay enough for you to continue to pick apples, that's great. If someone starts coming every day and taking the entire crate, I don't blame you for discontinuing the convenient apple sales, I blame the thief.

              • sokoloff 21 hours ago

                In this allegory, did AWS take all the apples in the crate while paying them $1 for every apple, thus becoming the bad guy?

                • stavros 21 hours ago

                  No. It took the entire crate and paid nothing.

                  • sokoloff 21 hours ago

                    I think there's a massive difference between "paying what was required by the offer" and "paying less than was required by the offer" and only one of them makes you a thief.

                    • stavros 20 hours ago

                      I think there's a massive difference between the letter of the law and the spirit of the law, and saying "but the letter of the law didn't say I couldn't!" doesn't make you any less of a thief.

                      • Chris2048 16 hours ago

                        > "but the letter of the law didn't say I couldn't!" doesn't make you any less of a thief.

                        Yes it does. And it's moot because the apples were offered for free, no restriction on usage.

                  • cthalupa 11 hours ago

                    Most of the companies behind Valkey were writing significant code for Redis. It was certainly not a case of them paying nothing.

                    Valkey has some of the (formerly) most prolific Redis contributors for the era in which it was forked.

              • paulddraper 19 hours ago

                This analogy falls apart because there wasn’t a price for the software.

                It’s like someone said “free whole apples, or $2/lb for sliced apples.”

                And someone came, took all the whole apples, cut them, and sold them themselves.

                • stavros 19 hours ago

                  Sure, but presumably you can engage with the spirit of the analogy?

                  Let's be pedantic, and say someone gave apples away in exchange for donations. While everyone only took a few apples and donated, things were fine, but then someone decided they could just take all the apples and sell them elsewhere.

                  Is it the fault of the first guy for not offering free apples any more, or is the second guy why we can't have nice things?

                  • sokoloff 18 hours ago

                    > but presumably you can engage with the spirit of the analogy?

                    What you’re calling “the spirit of” the analogy, others are seeing as “the bias embedded in” the analogy and you seem annoyed that people aren’t accepting your proposed analogy as a valid analog to the topic under discussion.

                    You think they’re changing the subject; others, including me, experience you as the one doing that.

                  • Chris2048 16 hours ago

                    Why is an analogy needed? Just engage with what actually happened.

                  • paulddraper 19 hours ago

                    The donation example tracks.

          • yjftsjthsd-h 4 hours ago

            > I'm not sure how the OSS companies are the bad guy here.

            The formerly OSS companies, you mean.

          • bigstrat2003 16 hours ago

            There is no bad guy. The OSS license meant that AWS was perfectly free to do as they did. If the companies who licensed their software as OSS didn't want that, then they shouldn't have used an OSS license.

            • stavros 14 hours ago

              Ok, then fine, the companies who licensed their software as OSS did that for as long as they wanted to, and then they moved away. What's the issue here?

              • jen20 2 hours ago

                There isn't one? Either with the change or with the ensuing forks, in principle.

  • Galanwe 18 hours ago

    These arguments against AWS are boring. 99% of the negative comments are along the lines of "so I have a dead simple product, I don't know anything about AWS, I logged in and it was super complicated and it seemed pricey".

    Well guess what, if you have a CRUD website and 100 users you're just not the target. Move on.

    Some days ago I wanted to sketch a 3D model of my TV remote. I opened Blender and what a mess of complicated windows and panes. I closed it immediately. Do I think Blender is an over-complicated mess? No, I just think I'm not the target. And I'm not offended to be too noob to use it.

    • anymouse123456 18 hours ago

      I agree, this is a common story and your point stands for some significant percentage of the complaints.

      It should be made clear though, that some of us helped spend many millions in obviously wasteful on-prem infra in the nineties, bought into AWS wholeheartedly when it came out, fought through the ignorance, developed the ability to deliver highly scaled applications on the platform over many years and at least some of us still carry those same beliefs:

      - It's more complicated than it needs to be

      - It's more expensive than it should be

      - Pricing is more opaque than it should be

      Meanwhile, the cost of other options (including self-managed, on-prem infra) has fallen massively since those early days of AWS.

      • tempest_ 17 hours ago

        Prior to the RAM crunch you could buy 4 or 5 servers for ~$50k that would be more than capable of handling many enterprises' needs. The thing is, the industry has sorta lost the skill set to host and maintain them. The people who can do this still exist, of course, but they are outnumbered by the YAML jockeys 10 to 1.

        There are also other things that the cloud hides in its price as well. Redundant networking, provisioning, rack space, internet connections, firewalls, UPS backup, power usage.

        Still I think a lot of startups would benefit from hosting their own stuff if they intend to be a long term business instead of just shooting their shot and hoping to be acquired.

        • reactordev 13 hours ago

          No, you misunderstand, it's not that we lack the knowledge or skills (we don't!) it's that the backbones and pipelines all converge on these hyperscalers and that's where you get the best throughput and least latency.

          I clearly remember having a discussion with a very VERY large company I worked for at the time about getting some NVidia hardware for our own enterprise data centers and they flat out refused. Now, they have lost any advantage they could have had.

          The issue with AWS is that they started off cheap, easy, simple and grew into an enterprise mess complete with opaque pricing. That's an issue. The complexity itself has created a whole new lane of work for the SRE where they can specialize in AWS and not do anything else. It's grown beyond just a cloud provider. People who are still expecting a cloud provider are going to be sour about it.

      • regularfry 13 hours ago

        This is borne out by the fact that there are alternatives that are:

        - dramatically simpler

        - cheaper

        - easier to budget

        while retaining the scale-on-demand and hide-the-actual-hardware properties that the industry jumped for joy at. What they don't have is the nobody-got-fired-for-rearchitecting-to-aws bit.

    • jph00 12 hours ago

      There's always someone making this claim when negative comments about AWS come up.

      They almost always come from people that don't have experience running substantive infra at scale without AWS, so they can't make an informed comparison. The complexity of doing so, for a lot of infra, turns out to be lower than using AWS. Also, you end up with transferable skills and a deeper understanding of the foundational protocols and systems. And you save a lot of money, both because you don't have to pay to manage that complexity, and the systems themselves are cheaper.

    • senko 18 hours ago

      If you want to design TV remotes, you better learn Blender.

      If you want to host something complex enough to warrant AWS, you should also understand how to run it yourself.

      These arguments for AWS are boring and sound like uninspired regurgitation of their sales pitch. I recall hearing the same about IIS and Windows a few decades back.

      Turns out, they both have pretty good marketing departments!

      • voidUpdate 3 hours ago

        If you want to do actual design, I'd recommend a parametric modeller. Blender really just doesn't cut it for that kind of thing, even with addons

    • gizzlon 14 hours ago

      I see a lot of learned helplessness around this stuff. People managed fleets of servers before the cloud, you know; it's not impossible.

      Cloud has pros and cons, both for small and large setups. I've spent ca 10 years working with GCP, and as the article says, there's a lot of complexity in these systems as well. And the network cost.. yikes

    • cryo32 17 hours ago

      Nope. We have an incredibly complicated product, a bunch of actual experts, and paid-up high-level enterprise support.

      It is about 8x more expensive to run it on AWS than it was on actual hardware. And that's using their reference architecture and designs. And the sprawling nature of AWS services and uptake makes it pretty damn hard to get out. We are slowly and quietly migrating everything to IaaS / kubernetes so we can get it out again. Just moving to kubernetes and packing stuff tight on EKS has shaved 30% off our costs already.

      We were sold a lie and fell for it hook, line and sinker.

      Edit: also fuck things like Lambda. It's literally the most horrible experience that the universe can muster. Moved most of our lambdas to simple boring http services on top of Go and just leave 20 instances running. Just not having to deal with CloudWatch saved us more money than Lambda could have.

      • tacticus 12 hours ago

        > Edit: also fuck things like Lambda. It's literally the most horrible experience that the universe can muster. Moved most of our lambdas to simple boring http services on top of Go and just leave 20 instances running. Just not having to deal with CloudWatch saved us more money than Lambda could have.

        Imagine if, instead of being tied to AWS-special interfaces, Lambda had shown up as something closer to Cloud Run!

        Though hopefully not the knative style that azure first went with and the LOOOOONG start times.

        • cryo32 6 hours ago

          It'd still suck compared to a completely boring process you can just run on your desktop by ./'ing the executable and looking at the console output. Then chuck it in kubernetes as a ReplicaSet.
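
          Concretely, that "chuck it in kubernetes as a ReplicaSet" step (with the ~20 running copies mentioned upthread) is about this much YAML. A sketch only — the names, image, and port here are made up:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: boring-http          # hypothetical name
spec:
  replicas: 20               # "just leave 20 instances running"
  selector:
    matchLabels:
      app: boring-http
  template:
    metadata:
      labels:
        app: boring-http
    spec:
      containers:
        - name: server
          image: registry.example.com/boring-http:latest  # hypothetical image
          ports:
            - containerPort: 8080
```

          (In practice most people would wrap this in a Deployment to get rolling updates for free, but the ReplicaSet is the boring core of it.)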

    • padjo 6 hours ago

      But that's not what this article is? The author is clearly a long time AWS user and former evangelist who has soured on it as it has become increasingly bloated.

    • smallnix 18 hours ago

      The main issue with account suspension is not boring to me.

    • akvadrako 15 hours ago

      It's true the comments get it wrong. But their main point stands; they shouldn't use AWS.

      It's also true that most companies which AWS does target shouldn't use it either, unless you have a good reason why ( like you need data centers in every continent or to quickly scale to 10+ thousands of cpus ).

      • Hendrikto 3 hours ago

        > like you need data centers in every continent or to quickly scale to 10+ thousands of cpus

        Which for some reason many people think they need, while in reality 1% actually need it.

    • pier25 18 hours ago

      Maybe but that doesn't mean that the AWS console isn't a royal mess.

    • bellowsgulch 15 hours ago

      that's not a great argument: any professional who doesn't know their operating costs is barely a professional

      would you be more enamored by roofers who came to your house and couldn't break down your quote because they were too professional to know the cost of asphalt shingles?

      is it more sophisticated to you that you go to a fish market and the price of the goods isn't listed and you have to ask the cashier for every catch?

      perhaps we should all be artists who walk in to supply stores purchasing oil paints not caring what the tubes costs because you're not the target if you want to know the cost of your materials

    • ray_v 16 hours ago

      Did blender charge you thousands of dollars when you touched it wrong while trying to learn to use it? /s

      • pseudohadamard 6 hours ago

        > Did blender charge you thousands of dollars when you touched it wrong

        No, but it did press charges. We settled out of court, but my wife left me over the whole affair.

  • raffraffraff 20 hours ago

    I always smile at posts like this. They're right and wrong at the same time. Systems should be "as simple as possible, but no simpler". And thinking that you can gloss over the detail is just going to create more hassle later on.

    IAM is just complex. I can't think of any implementation of "users, groups, roles, policies, identity providers, oidc" that is truly simple.

    I'm reminded of a guy I worked with, who fought against Kubernetes adoption because it was "too complex", only to slowly reinvent Kubernetes badly, ad hoc, out of vault, consul, systemd, nomad, iscsi, ansible, jenkins, puppet, bash, spit, glue... making lots of mistakes along the way. You think you don't need to implement some feature until you do.

    Another thing I'll say about AWS (having been the sole infra guy at a few startups) is that it's well within most people's abilities to learn it. And you can usually avoid the shitty stuff. You think Lambdas suck? Don't use them! You could use EKS, ECS or bare EC2.

    • Aperocky 19 hours ago

      Some internal perspective - IAM has maybe thousands of options but fundamentally it is "what does this role have access to doing (action + resource)" + "who has access to this role". That is really it from a 10k foot level.

      IAM is great because it applies internally just as it does externally. The internal AWS teams don't get more access than you do, and if we get access to do a certain thing on your account to perform a specific service, that's because you have a service principal in your IAM trust relationship that allowed us access, which you can see and audit. For instance, Lambdas have a Lambda role because you don't want the Lambda service just reading your S3 buckets on the basis of "we're AWS, we automatically get access"; you can absolutely see and control access, even if it is internal to AWS.
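
      To make that concrete: the trust relationship described here is a small JSON document on the role. The standard one for Lambda (a sketch from memory) just says the Lambda service principal may assume the role — what the function can then do to S3 etc. is granted separately in the role's permission policies:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```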

      • amluto 17 hours ago

        > but fundamentally it is "what does this role have access to doing (action + resource)" + "who has access to this role". That is really it from a 10k foot level.

        Hahahaha. No, fundamentally it is one input into a huge mess that you cannot actually see or audit from a 10k foot level.

        AWS has produced a long, rambling and imprecise description of (some of?) what’s actually going on. You can read it here:

        https://docs.aws.amazon.com/IAM/latest/UserGuide/access_poli...

        Some of what they’re describing doesn’t even live within the IAM umbrella as far as I can tell. I’m not convinced that a concise, formal and unambiguous specification exists anywhere, even within AWSes own development teams.

        I’ve asked LLMs to write AWS “policy”. They get the grammar mostly right. They cannot explain what the effects are in a manner that they will stand by after they search the web for documentation. Since I have never found good documentation despite looking, I can’t personally do any better than the LLMs. I’d love to be pointed at real documentation or specs.

        • Aperocky 10 hours ago

          They are just slight variations on the fundamental idea. For example, resource policies and org SCPs are the same check at a different level (i.e., more of "who has access to what"). They are attached to the Organization and to an individual resource respectively (vs. the Account), so they need to live in a separate place. And then, in use, they are ALL checked before access is granted.

          I don't work for IAM, but I have worked for several other teams over the years, and IAM is actually one of the least confusing services. I am definitely biased, though, and have a more-than-average amount of experience on this particular subject. I still think the general idea is saner than Azure's account model, for example. I do think this reflects the philosophical question of whether clouds are building blocks or consulting projects; I personally think IAM got that right.
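
          Roughly, the documented same-account evaluation reduces to something like this (my simplification — permission boundaries, session policies, and cross-account rules are omitted):

```python
def is_allowed(request, scps, resource_policy, identity_policies):
    """Sketch of simplified same-account IAM evaluation.

    Each policy is modeled as a callable returning "Allow", "Deny",
    or None (no matching statement).
    """
    applicable = scps + [resource_policy] + identity_policies
    # 1. An explicit Deny in any applicable policy always wins.
    if any(p(request) == "Deny" for p in applicable):
        return False
    # 2. SCPs are guardrails: every SCP in the chain must allow.
    if not all(p(request) == "Allow" for p in scps):
        return False
    # 3. Then an Allow from the resource policy or any identity
    #    policy grants access; no match at all is an implicit deny.
    return resource_policy(request) == "Allow" or any(
        p(request) == "Allow" for p in identity_policies
    )
```

          So it really is "one check, attached at several levels" — the complexity is in where each document lives, not in the combination rule.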

          • amluto 9 hours ago

            > And then in use they are ALL checked before an access is granted.

            I know they’re all checked. What don’t know is how the results of those checks are combined to get the final result. As far as I can tell, the result is not something like OR or AND — it seems like it’s something exceedingly complex and that the output of the policy part may be more complex than just a Boolean value.

            Maybe the underlying implementation is fantastic (and my distinct impression is that AWS takes this stuff far more seriously than Azure), but that doesn’t mean that the docs are easy to find or that the system actually makes sense in anything other than an agglomeration-of-backwards-compatible-layers sense.

      • bkaraaslan 7 hours ago

        One thing I would like to see in IAM is something like verb actions. Currently, if you want to grant least privilege, you have to trial-and-error your API call until you get it right. Since AWS has a very good API definition across all consumers (REST, aws-cli, and boto use the same structure), I think it would be doable.

        I mean something like actions: s3:cp Resource: bucketarn/key

        Most of the time, actions are self-explanatory and good enough, but I recently gave a developer permission to scale an ASG, and it required a lot of unguessable actions. If I could instead grant "actions: scale" (I forget the correct cli parameter for it), it would make for a much cleaner environment.
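
        For contrast, here is roughly what the ASG-scaling case ends up as today. A sketch only: the action names are believed correct but the exact required set is the trial-and-error part, the account ID and group name are made up, and Describe* actions generally can't be resource-scoped, hence the separate wildcard statement:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "autoscaling:DescribeAutoScalingGroups",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:SetDesiredCapacity",
        "autoscaling:UpdateAutoScalingGroup"
      ],
      "Resource": "arn:aws:autoscaling:*:123456789012:autoScalingGroup:*:autoScalingGroupName/my-asg"
    }
  ]
}
```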

      • paulddraper 19 hours ago

        > that's because you have a service principal in your IAM trust relationship that allowed us access

        That’s why it’s so complicated!!!

        I don’t understand how I should evaluate trust for your internal EBS org versus your internal ALB org.

        I kinda just expect it to be all “AWS” trust.

        And it’s all garbage anyway. There’s no way I can prevent the hypothetically untrustworthy EBS team from surreptitiously adding charges to my account if they want to. Right? This would maybe make some sense if I could top level turn off/on services, but that isn’t how it works.

        —

        I have no doubt this makes some sense from someone inside the machine, but from the outside it’s not helpful nor useful.

        • Aperocky 17 hours ago

          3 things to untangle here.

          1. It's about trust and auditability. While you may not want or need it, there are a lot of customers that are either interested in, or legally obligated to, knowing who has accessed certain data.

          2. It's about dogfooding - how would you trust an identity and access system when the company does not even use it internally?

          3. In general, there are quick buttons and templates to do it if you don't want to worry about it, and in the LLM age this gets easier. Personally I prefer this because I intensely dislike "magic". It allows you to control, to the maximum degree possible, what is actually going on, despite not owning any of the physical aspects of the data center.

          • regularfry 13 hours ago

            1. It's about imposing worst-case complexity on the 99% of people who will never benefit.

            2. Some of that complexity only arises because of the dogfooding.

            3. No, it doesn't get easier, because you still need to understand what those things actually do to know if they're right for your use case; and besides, if you're driving everything from terraform then having a "quick button" is precisely useless.

            We had an AWS rep try to sell us on an AI tool to help with predicting the IAM permissions that our infrastructure code needs. My response was, essentially, "why have you built a deterministic system so complicated that it needs an AI to configure correctly?" I have not had an answer.

          • paulddraper 7 hours ago

            That's all fine and good, but I still don't know how much to trust the EBS team versus the ALB team.

            And I don't think you do either.

        • jimbobimbo 18 hours ago

          >I kinda just expect it to be all “AWS” trust.

          This would be very unwise from a security standpoint. Internal access to customer stuff is granular and made hard for internal staff to gain, to minimize the chances of a screw-up, intentional or not.

          • NewJazz 18 hours ago

            I agree. Adding a service principal always raises an eyebrow for me, just a blanket "hey we're aws trust me bro" is a little bonkers.

            • paulddraper 8 hours ago

              How does this work in Azure and Google Cloud?

    • hedora 18 hours ago

      IAM is unnecessarily bad. I recently had to set a trivial policy, and was doing it correctly.

      The console kept warning me that I was giving root AWS access to my external application because they want people to use the locked in AWS path, and I was running off cloud.

      On top of that, they break copy paste on the web console, so you can’t just ctrl-c ctrl-v and then ask Claude to explain their WTF-ery. Instead, you have to OCR or send a PNG.

      I honestly did not think they could make IAM worse, yet here we are. Bastards.

      • joshcartme 7 hours ago

        You could probably open the developer tools, find the console elements, and extract the data from there to get around the copy/paste limitations. I'm not familiar with the AWS console, but say it's an input: select it in the dev tools, then in the dev tools console do $0.value

      • raffraffraff 14 hours ago

        You think you're complaining about IAM, but really you're complaining about the web UI. I rarely use the web console; I use terraform or the cli. If you're vibe coding your infra with Claude, point it at the cli / terraform. Skip the UI.

        • regularfry 13 hours ago

          With terraform you get the amazing experience of having to iterate, one at a time, through the five hundred and thirty seven new permissions you need to grant having decided that a lambda configuration needs to be ever so slightly different than it was yesterday, because there's no documentation linking terraform creation of resources and the IAM permissions required to successfully make the AWS API call behind the scenes. Or those for updating a resource, which are different, so you get to do it all again tomorrow. Or deleting - different again. Fun for the day after.

      • pseudohadamard 6 hours ago

        Agreed on IAM, and on TFA's point that once you see the horrendous complexity in IAM you start seeing it everywhere else in AWS as well. And with IAM, after all the effort you've put in, you can never really tell what is and isn't enabled. If you run your own server you can check permissions, run access-control audit scripts, and so on, and say with a pretty good level of confidence that X is possible and Y isn't. With IAM it's more like "I'm pretty sure I figured out the right silly-walk for X, but I have no way to tell what else might be enabled".

        AWS: I came, I saw, I threw up in my mouth a little, I left.

      • hedora 18 hours ago

        I guess I should also point out that I’ve used AWS at extremely large scale in the past, which is why I’m running this subproject on another cloud.

        As for simple permissions, go read the UNIX paper. It spends a page or two on their approach and is all you need.

        Then, read the paper on mapping between NTFS SMB ACLs and NFS. It’s either impossible or undecidable, depending on the deployment. IAM is from the windows acl lineage which is known pessimal from a usability and security perspective.

        • cyberax 17 hours ago

          IAM is NOT from any lineage. It has grown organically and is complicated, just like any other policy language. AWS even uses an automated proof assistant to verify IAM policies.

          However, the secret to IAM in AWS is to NOT use IAM. Just create separate AWS accounts for separate services and only share whatever resources are needed. Then you can have dead simple IAM policies because you won't need to do granular permissions ("AWS role X can access database Y").
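          A hedged sketch of that pattern: a resource policy in the owning account that grants one other account read access, so neither account needs granular role policies. The account ID and bucket name here are made up:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReaderAccount",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::222222222222:root" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-shared-bucket/*"
    }
  ]
}
```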

          • dotwaffle an hour ago

            > Just create separate AWS accounts for separate services

            My understanding is that different AWS accounts have different mappings of availability zones, so it's very easy to suddenly find yourself with an unexpected bandwidth bill due to all the cross-az traffic.

            I've been irritated at AWS (and the other large cloud providers) that they charge $0.01/GB for cross-az traffic. That's $3.24/Mbps -- about the same I was paying for internet transit (as in: from London to anywhere in the world) 20 years ago, and this is just between two datacenters in the same city controlled by the same organisation, markup must be 10,000x or more considering these places are cross-connected with massive bundles of fiber!
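            The $0.01/GB to $3.24/Mbps conversion checks out; a quick sanity check of the arithmetic, assuming decimal GB and a 30-day month:

```python
# Convert a per-GB transfer price into a monthly cost per sustained Mbps.
price_per_gb = 0.01                              # USD per GB, cross-AZ
seconds_per_month = 30 * 24 * 3600               # 2,592,000 seconds
bytes_per_month = 1e6 / 8 * seconds_per_month    # 1 Mbps sustained, in bytes
gb_per_month = bytes_per_month / 1e9             # 324 GB
print(round(gb_per_month * price_per_gb, 2))     # 3.24 USD per Mbps per month
```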

    • hunterpayne 10 hours ago

      "who fought against Kubernetes adoption because it was "too complex", only to slowly reinvent Kubernetes badly,"

      If you are dynamically scaling a set of web services, sure. The problem is that people use k8s for running batch pipelines and streaming analytic services and a bunch of other things too. And k8s is terrible at doing those things and entirely too complex. And if you don't have to scale your web services very often, then k8s is a waste in that case too. It's about the right tool for the job, and k8s's job isn't deploying to the cloud; it's dynamically scaling a website.

    • le-mark 20 hours ago

      > fought against X adoption because it was "too complex", only to slowly reinvent X badly

      This is a surprisingly common pattern in technology and software. Some things are definitively the “standard” at this point yet so many people simply refuse to spend the time to properly learn them.

      • ipsento606 19 hours ago

        > This is a surprisingly common pattern in technology and software. Some things are definitively the “standard”

        It is also a surprisingly common pattern to adopt very complicated solutions for applications that are never going to need them

        ultimately it is not possible to come up with a "standard" that is an acceptable replacement for good judgement

    • cyberpunk 20 hours ago

      Another point is that while it can be more expensive than self-hosting, the savings are dwarfed by the engineering costs. A decent infrastructure engineer working for 2 man-months on your “money saving” OVH setup costs you more than you can possibly save by not just using Fargate or RDS or whatever.

      • amluto 17 hours ago

        How much would you pay for 2 months of infrastructure engineer time? And how many millions / tens of millions / hundreds of millions are you imagining being spent on overpriced AWS services?

        (Also, those AWS services are not engineering-free. I tried to migrate a system to RDS once and gave up after quite a few hours when I got to the part of the documentation that suggested that I edit my sql dump using sed to get it into a form that RDS would accept. No, thanks.)

      • noprocrasted 19 hours ago

        But unless you're on a PaaS, you have "infrastructure engineers" already. So why not at least let them make back their salary by having them build a cost-efficient infrastructure?

      • izacus 19 hours ago

        This is rarely actually true but it's a common falsehood told by people who have financial interest of keeping everyone on AWS.

        And that includes engineers that only know how to use AWS and are terrified at having to learn something else.

    • Aeolun 13 hours ago

      I think the big problem with Amazon IAM is not that it's inherently complex; it's that every team in AWS came up with their own way to define permissions and the calls they allow you to make. So the API Gateway set of permissions uses a completely different method for no discernible reason.

  • sudosteph a day ago

    I'm surprised by the author's hate towards DynamoDB. It's probably one of my favorite AWS Services. Great availability and no operational overhead. Cost was pretty minimal too each time I've used it, but you do need to spend some time architecting your data model up front, and that requires reading service docs and understanding it.

    • andoando 19 hours ago

      We used DynamoDB pretty much exclusively at Tinder, cause it was the founders choice early on. Horrible horrible choice and after 4 years working on it I dont see why you would.

      1. you have a limited number of global supported indexes, 5 iirc, which means your queries are very limited. If your use case ever expands beyond that you're pretty screwed.

      2. You will have race conditions. Strong consistency is 2x the cost, and not supported on global indexes.

      3. Data is split into 10GB partitions and all the read/write quotas are split evenly by the number of partitions. 100 reads you're paying for is actually 10 reads per partition if you have 10 partitions. Hot sharding becomes a real problem.
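      The partition arithmetic in point 3, as a sketch. This is the classic even-split model; DynamoDB's adaptive capacity now softens it, but hot partitions can still throttle:

```python
import math

def per_partition_rcu(provisioned_rcu: float, table_size_gb: float,
                      partition_size_gb: float = 10.0) -> float:
    """Provisioned read capacity divided evenly across size-based partitions."""
    partitions = max(1, math.ceil(table_size_gb / partition_size_gb))
    return provisioned_rcu / partitions

# 100 provisioned reads over a 100 GB table (10 partitions) -> 10 reads each
print(per_partition_rcu(100, 100))  # 10.0
```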

      Take your document data, stick it in a JSONB column and you get the same performance way cheaper + queryable/indexable columns. The only time Dynamo wins I think is that it scales well globally, but you probably don't need that

      • rowanseymour 15 hours ago

        IMO if you've got a use case that requires querying in so many ways that you need several indexes, then DynamoDB is probably the wrong choice. It excels at stuff like user specific histories that are well partitioned, read back in one way, and ideally can be written asynchronously by a separate writer process.

        • andoando 14 hours ago

          At the beginning there was only one query; it got expanded over time with new features. It wasn't well thought out, no.

          If you need high-scale globally distributed persistent data, uniform distribution of hash reads/writes, don't care about schema, and know your queries will remain simple, yeah it's a fine choice.

          I just wouldn't consider it outside of enterprise level

      • mastazi 10 hours ago

        > you have a limited number of global supported indexes, 5 iirc

        you can create 20 global (GSI) and 5 local (LSI) indexes per table[1], I think the number must have been lower at some point in the past, because it's not the first time I hear this complaint

        [1] https://docs.aws.amazon.com/amazondynamodb/latest/developerg...

        • andoando 10 hours ago

          No I just misremembered and mixed up the global and local.

    • ufmace 21 hours ago

      That's pretty much what I came into this thread to say. The thing I'd add is, DynamoDB is pretty nice if you understand how it's meant to be used - a relatively dumb key-value store with good persistence and table-size scaling to the sky. Definitely don't attempt to use it as a SQL database.

      The best way I can come up with to rack up a $75 bill for some prototype code is to vibe-code a thing that attempts to treat it like a SQL database with JOINs and GROUP BYs etc. Or similarly write code against it absent-mindedly with about as much understanding as a 2-year-old free AI tool.

      Where it really shines is use-cases like I need like 1 or 2 simple relatively small tables of persistent storage and don't want to deal with a full RDBMS. Or I need 1 ridiculously huge table to be queried in a relatively simple way, and don't want to deal with fitting that data into a RDBMS.

      • andoando 19 hours ago

        With AI now, writing queries is a joke. But you can just create a two-column table: key, JSONB, and call it a day, and you get your easy document store + indexes, JSON search, relational goodness, and atomicity and consistency for free
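        A hedged sketch of that two-column pattern using SQLite's built-in JSON functions (Postgres would use a JSONB column and a similar expression index; the table and field names here are made up):

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
# One key column, one JSON document column.
db.execute("CREATE TABLE docs (key TEXT PRIMARY KEY, doc TEXT NOT NULL)")
# An expression index makes a field inside the document cheaply queryable.
db.execute("CREATE INDEX docs_email ON docs (json_extract(doc, '$.email'))")

db.execute("INSERT INTO docs VALUES (?, ?)",
           ("user:1", json.dumps({"email": "a@example.com", "plan": "pro"})))

# Query by a field inside the document, no schema migration needed.
row = db.execute("SELECT key FROM docs WHERE json_extract(doc, '$.email') = ?",
                 ("a@example.com",)).fetchone()
print(row[0])  # user:1
```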

    • IsTom 3 hours ago

      I might be holding it wrong, but last time I tried to use DynamoDB it made absolutely no sense performance-wise to me. Postgres on my laptop was many orders of magnitude faster for fraction of price. It seemed like it maybe might make sense when you hit multiple TBs of database data and can no longer run on a single server? But then the costs would be sky-high and you probably could engineer your way around this with this kind of money.

    • aorloff 20 hours ago

      Dynamo, and a lot of the other services mentioned (Lambda) have very specific use cases. Do not use happy fun key value store as your database.

      • ovao 19 hours ago

        I'd say "use it as your database if you know your access patterns make it suitable/well-suited for its use as your database". Even then it will probably not be your only database — if it's part of your MSA/SOA.

        I would not build in DynamoDB if you suspect your access patterns will drastically change over the lifetime of the application (or if you intend to, e.g., plan to build a data warehouse or something crazy with it).

    • sambellll 21 hours ago

      Here to say the same thing.

      I built an app a few years ago and needed some sort of DB to store around 50 million records that had ~10k reads+writes per month with 1 index. It cost me something like $50 to load it up initially, and then something stupid like 10 cents/month to maintain.

  • tailscaler2026 19 hours ago

    Anyone considering leaving AWS and thinking they'll transfer all their data for free [1], I've got news for you: It's a lie.

    AWS takes as long as possible (for me it was a month) to respond to the initial DTO request, then require you to submit a multi-page form answering a barrage of questions about why you're leaving, where you're going to, what services you used, and estimated data egress. A week or so later, if they approve the request, you're not allowed to begin DTO until 60 days after the approval.

    By the time you can egress your data for "free", you've been stuck on AWS for 3-4 months since you first made the decision to leave.

    [1] https://aws.amazon.com/blogs/aws/free-data-transfer-out-to-i...

    • cryo32 17 hours ago

      If you have a lot of data it's cheaper to lease a Direct Connect line into an AWS zone and suck it out through that.

      • layoric 12 hours ago

        I might be reading the pricing wrong but you have to pay per hour for the port plus per GB transfer? And looks like the cheapest is $0.02 per GB? Is that really the 'cheap' option? That looks fine for a TB or two, but still crazy when getting closer to PBs.

        • cryo32 6 hours ago

          You do, but you can actually negotiate discounts with AWS when you get to Direct Connect level. It's only cheaper than the other options; it's never acceptable.

          This is why we're slowly and quietly moving back to a couple of cages in a DC. Well, we were until the AI companies bought all the fucking RAM and SSDs.

        • stingraycharles 10 hours ago

          Yes that’s the cheap option.

          But to be fair, I deal with several customers that are in the double digit petabyte scale. When you’re operating at that scale, and have 7-figure AWS bills on a monthly basis, AWS is suddenly a lot more available to you and much more willing to accommodate pretty much anything.

    • otterley 14 hours ago

      > A week or so later, if they approve the request, you're not allowed to begin DTO until 60 days after the approval.

      Do you have proof of this? That is not disclosed in their policy.

  • djyde a day ago

    I've transitioned between cloud services and self-hosting a few times:

    1. Vercel Phase: My first project used Vercel. Since my project was Next.js, the experience was decent. But as my project gained some users, I found that even for projects under 100 users, I needed to pay $20 per month. Since my service didn't require high performance, this cost felt steep.

    2. Self-host Phase (Hetzner + Coolify): Later, I started setting up my own server with Hetzner and deploying with Coolify. Since Coolify is open-source and free, I only had to cover the cost of a VPS (even $5 a month was sufficient). I could deploy PostgreSQL instances and run a web server on it. But later I discovered that even this way, I still had to spend a lot of effort maintaining PostgreSQL and Redis. Even though they were containerized with Docker, managing them was still troublesome. I needed to pass various system and environment variables between services, which was very tedious.

    3. Cloudflare Phase: So later I switched to Cloudflare. With Cloudflare Workers, I can deploy fullstack applications and use D1 Database and Cloudflare KV to replace Redis. These features can be called directly within the Worker without needing to pass environment variables.

    Plus, the local development experience is excellent and the pricing is very reasonable, so I've been using Cloudflare's entire suite ever since.

    • causal 21 hours ago

      Yeah Cloudflare offering has become everything I wanted from AWS. So much simpler to deploy a basic full stack app + files. AWS has become considerably more difficult than self hosting.

      • AdityaAnuragi 20 hours ago

        I agree, I'm starting to like Cloudflare increasingly as well

        Here are a couple reasons of mine (PS I'm still a little new)

        1) V8 isolates for serverless functions to address cold start problems, sure the entire node env ain't there but libraries like Hono are designed to work in that env... Combine that with their near immediate start-up - simple lovely

        2) UI. AWS to me feels soulless; if there's an entire industry to make the AWS UI not suck, it's obvious their UI is just bad, up to the point where people pay a premium for a good UI. The Cloudflare UI is so much nicer, at least to me

        I recently developed a library, and for that I made a landing page and documentation with Astro (no server, just static stuff). I was checking out how to deploy this on Vercel and Cloudflare: Vercel had 100GB/month of bandwidth free, which is nice; what's even nicer is Cloudflare has infinite (practical infinite, not the theoretical infinite of course)

        And once again, that's just lovely to work with!

      • adamddev1 18 hours ago

        I've really enjoyed using CloudFlare and I've been impressed, but I'm afraid it will descend into a broken mess as they enthusiastically use more and more vibe-coding.

    • aleksiy123 18 hours ago

      Went through a similar phase.

      I think a mix of 2. and 3. is good for a small team or solo dev. I'm throwing in a bit of homelab by adding some action runners and models on my desktop as well.

      But Cloudflare is great value for small teams. Not sure how it is at higher scale.

      On the topic of env and config: it took me a while to get this right, and maybe I overengineered it.

      But I invested a lot of time in trying to standardize env definitions, secrets manager, and per env config definition defined in my nx projects, and consumed by the commands or deployers. As well as pulumi for IaC.

      I tried a couple of different approaches, but finally I just decided to use typescript as my config language. I use nx project.json but defined using typescript. And just define the env config as typescript functions to be injected to each command or deployment as a pure function of target env.

    • pier25 18 hours ago

      I've been using CF Workers since 2020 or so. The biggest con is that your app will be coupled to their infra. It's less coupled than eg Firebase but still.

      For the past 10 years or so I've mostly used Heroku and then Fly. Last year I invested time into switching to self-hosting with dokku for new projects. After the initial learning curve it's been great. Honestly I don't see the point of using anything else except if I need to run something at the edge.

    • cube00 21 hours ago

      I really wanted to like Cloudflare Workers and I'm sure there's good technical reasons but the way you need to use a Wrangler project to do things like enabling email felt too much like I was about to get locked into the platform.

      It seemed like the bindings you needed to set to allow email can't actually be set (or even seen once Wrangler sets them) from the console at all.

    • graemep 20 hours ago

      > Even though they were containerized with Docker, managing them was still troublesome

      Did docker make it easier?

      The only issue I have with PostgreSQL is a bit of migration effort moving to new major versions.

      > I needed to pass various system and environment variables between services, which was very tedious.

      Was docker making this harder?

  • jfengel a day ago

    I don't work in that area, so I only touch AWS once in a while for personal fun projects.

    And every time it's a nightmare. I'm just banging out a server for my experimental card game, not setting up a new financial institution. Everything looks as if I'm preparing to scale to infinity tomorrow, with a staff of a thousand and a budget backed by VCs.

    Fortunately there's Netlify and similar, who put a gloss on it so that I don't have to boil the ocean. I figure that one of these days I might actually be forced to learn IAM and VPNs and God only knows what else. Meantime, every time I touch it my eyes bug out.

    • chuckadams a day ago

      You can just spin up a raw VPS on EC2 or Lightsail, give it a public IP, and call it a day. You aren't required to implement every enterprise pattern in the book.

      • embedding-shape a day ago

        If there is any single service I'd avoid on AWS it's Lightsail, it'll cost you a lot more than almost anything out there, is slow as molasses (even tiny services can need tens of minutes to deploy) and you'll experience random failures not even AWS reps can explain to you. Avoid at all costs.

        It's a ghost of its former self, but I'd probably still rather use Heroku today than being forced to use Lightsail even once again.

        • callmeal 19 hours ago

          >Lightsail, it'll cost you a lot more than almost anything out there,

          Lightsail is pretty competitive (price wise) with other providers. Been running s B2B app on it for a few years now - nothing much, just your basic crud app running on lightsail instance + lightsail db. Nice to have a "monthly" rate on each instead of the EC2 opaque (and "surprise!") pricing.

        • graemep 20 hours ago

          I have only used lightsail for one project with two VPSs, but it just works like a VPS (two, because we have another for staging). Price is competitive.

          It's not my favourite, but it's not terrible.

          • Insanity 20 hours ago

            Same experience here, hosting some small projects on LightSail. It was pretty smooth to set up and get running, and no real complaints so far.

      • themgt a day ago

        Congrats, your raw EC2-hosted 500MB WebGL experimental card game went to the HN Front Page! You now owe AWS $30k in egress costs.

        • aorloff 20 hours ago

          Well this is the dream right ?

          You build something, well enough that it can handle the traffic, and people come, and it does.

          Welcome to the gaming industry

          • baobabKoodaa 16 hours ago

            No, it is not the dream. The same thing on Hetzner or Linode would cost $30 instead of $30k.

          • hkpack 11 hours ago

            > Well this is the dream right ?

            Yes it is, we call these dreams a nightmare

        • nostrebored 21 hours ago

          Egress costs have substantially reduced (thankfully)

      • ipsento606 19 hours ago

        > You can just spin up a raw VPS on EC2 or Lightsail, give it a public IP, and call it a day

        You could do this, but for the life of me I can't imagine why you would do this over using a platform like DO, Vultr, Hetzner, or any one of a hundred similar services that will give you a better developer experience for this kind of workflow, often at a fraction of the price

        • chuckadams 19 hours ago

          I never said it would be cheaper. I did say it wasn't complicated.

      • DaanDL a day ago

        But that's costly. Speaking of my own experience: going from a webapp fully hosted on an EC2 instance to a railway and vercel setup reduced my costs 10x.

        • liveoneggs a day ago

          t4g.nano is $3/m; a similar spec-ed fargate on ecs (just any docker container) is $10/m

          • jfengel a day ago

            This sentence beautifully encapsulates my point. I know that this is just ordinary jargon, but wow that's a lot all at once. And it does seem like something I need to know before I start.

            • liveoneggs 18 hours ago

              sure but on the flip side - when I signed up for vercel I had literally no idea what was going on. It just said "do you want to start a blog? here are 1000 templates"

        • chuckadams a day ago

          Maybe so, but it's still not the complexity nightmare that some would have us believe it is.

      • te_chris 21 hours ago

        “EC2 or Lightsail”. And this right here is why I use GCP. Google got VMs right.

        • nostrebored 20 hours ago

          GCP has similar offerings to Lightsail, Fargate, EC2, Lambda, or other compute substrates. Nobody is forcing you to use more than “basic” offerings. AWS core services are often architected that way!

    • benoau a day ago

      What amazes me is how Heroku absolutely nailed what most web apps need nearly 20 years ago.

      • ChrisBland a day ago

        I miss heroku dearly. Somewhere at Salesforce there is an exec who killed the product and shifted it to enterprise, and is now looking at the vibe coding revolution seeing the opportunity they missed.

        • iamflimflam1 a day ago

          I suspect the people responsible have fully justified to themselves any decisions they made, helped along with any bonuses they got for doing it.

        • christophilus a day ago

          Render has been an excellent replacement, in my experience.

          • baobabKoodaa 16 hours ago

            Last time I tried render, it did not allow me to spin up 1 instance of my web app, so I'm never going back. (To clarify: Render would always spin up a minimum of 2 instances.)

        • datadrivenangel 11 hours ago

          I've been enjoying railway!

        • maccard a day ago

          Digital ocean is the answer. You give it a container and off you go.

          • baobabKoodaa 16 hours ago

            Not sure how Digital Ocean is comparable to what Heroku used to be.

          • ipaddr a day ago

            Used to be. Now they're requiring 2FA for add-on domains over a certain amount

            • ceejayoz a day ago

              Of all the things to be upset about, mandatory 2FA doesn't seem like one.

              • ipaddr 21 hours ago

                2FA has been in place for years through email but this new requirement forces a phone.

                • ceejayoz 21 hours ago

                  Good. E-mail based 2FA is bad, and they appear to support TOTP too as an option, as they should. Wish they supported U2F though.

                  • ipaddr 18 hours ago

                    Why is email-based 2FA bad but phone good? There are classes of issues you get with phone 2FA compared to email

                    • ceejayoz 18 hours ago

                      Typically, you can also reset password via email, so it's really only one factor. Compromised email = compromised server.

            • maccard a day ago

              It’s negligent to not use 2FA for any cloud platform where credentials can be used to spin up resources.

              • ipaddr 21 hours ago

                I should have been more clear: 2FA has been in place for years; the phone requirement is new.

                • jlokier 19 hours ago

                  They use TOTP for 2FA (industry standard), which doesn't require a phone.

                  Their help page lists a bunch of 2FA app options, all of which run on phones, so it's understandable to think a phone is required. (I'm disappointed they don't list the app I use, which is Aegis Authenticator.)

                  But actually you can use any TOTP app, and they don't all need a phone. For example, macOS (desktop) has built-in TOTP 2FA as part of the password manager.

            • esseph 21 hours ago

              Good! Should have been done long ago

        • catlifeonmars 18 hours ago

          More likely than not they’re probably long gone, or have completely forgotten. The idea that someone out there regrets that decision is laughable. The fact that it’s laughable is sad.

        • cpursley a day ago

          Fly and Render are what heroku would be if they didn’t stop innovating. And neon db for Postgres.

          • trashburger a day ago

            > And neon db for Postgres.

            For 90% of the time when they're up.

          • baobabKoodaa 16 hours ago

            Fly is unreliable. Render does not allow you to spin up 1 instance of an app.

        • the__alchemist a day ago

          Why? It is still up, and working just as it used to.

    • KptMarchewa a day ago

      it's only a nightmare if you haven't had to deal with Azure

    • djyde a day ago

      I switched to Cloudflare and it's been a breath of fresh air - everything I need and the pricing is reasonable.

    • MagicMoonlight a day ago

      AWS is aimed at enterprise, not personal projects. Personal projects wouldn’t give them any meaningful revenue because the only thing that matters is cost.

  • aljgz a day ago

    Years ago, I joined a company, took over a dev team and was asked to launch the product in 3 months.

    They were using AWS, so I logged in to the account to add a few more machines. Right there, in front of my eyes, were the signs of an adversarial, abusive relationship.

    The UI to fire up a new machine did not show me the price. I had to look up the price in another table that did not have the specs.

    I had to have the two tables open, cross check the specs and price.

    If I had learned one thing from my past life, it was that if you see the signs of an abusive relationship, have the option to walk out, and don't, all that follows is your own fault.

    Created a DigitalOcean account, moved everything over. Set up our CI/CDs to deploy there, and spent the next two months on the product, launching one month earlier than promised.

    Some years before that I saw a video online where a person digs a hole near a river and puts in a pipe connecting the river and the hole. The fish push themselves hard through the pipe into their trap. Choosing the path of least resistance, and never backing off from a mistake: recipes for ending up like those fish. The video left a big impression on me.

    • clickety_clack 18 hours ago

      People don’t seem to realize how simple pricing can be made for the user. I switched to digital ocean too, and it’s great. I think people think it’s not really engineering if it’s not complicated, so they stick with these insane AWS/GCP/Azure setups. But it’s not 2012 anymore, this stuff has been figured out and commoditized, and managing your cloud setup should be “easy” for 99% of products.

      Edit: and when I say “99% of products”, I mean “99% of products where the team thinks they are building something too complicated for a simple setup”

      • zrobotics 16 hours ago

        Plus, I've never understood the argument that cloud is better because you don't need to deal with the complexity of managing a server. Yes, it's a very deep topic and there's a lot of nuances to managing a Linux box serving web content, but we've been doing that for decades and there is tons of information and tooling available.

        Every time I've needed to manage something on AWS I've been shocked at just how overwrought the whole system is. There's tons of AWS-specific terminology for everything, and lots of stuff is tremendously complicated to manage. I can definitely understand why companies need to hire people who are experts in AWS specifically, it's complicated enough to justify that. However, for me personally I'd rather learn more traditional sysadmin systems. The skills are more evergreen, and I'd rather spend my time learning open systems than one tech giant's specific system.

        About 6 months ago I needed to migrate some of our systems from DigitalOcean to Hetzner. It was a 2 day process that was very painless. The only complicated bit was managing the DNS switchover with zero downtime. If we were moving those same 3 components from AWS to GCP or Azure, it would have involved needing to rearchitect and rewrite a lot of software.

    • aurareturn 21 hours ago

      Doesn't Amazon have a very engineer-led product culture? Meaning, devs are often responsible for the UX and flow.

      I remember many years ago we hired a junior developer who just finished his internship at AWS and he showed me the dashboard he shipped all by himself in the summer with no product or designer help. It looked horrible.

      Some devs have a good product/UX sense but the vast majority are horrendously bad at UX.

      My point is that maybe it was intentional, but just bad UX culture.

      Edit: It wasn't intentional

      • grogenaut 20 hours ago

        I get why it's like this.

        Some background. I work at an Amazon sub. This is a good UI for the way we work. We don't spin up a single machine pretty much ever unless it's a cloud dev machine, at which point the price is listed at startup on a custom internal UI. They should consider putting that UI in the ec2 console.

        When I spin up machines I pick an instance class by looking through specs and the price chart and set it via AI into a cdk construct. Usually I pick a relatively normal machine type, digging through all the various enterprise discounts (which are not reflected in the prices in the console). Then as I roll out, or when I get resource limit alarms on the fleet, I adjust the instance types. Or when accounting asks me about price. In those cases I usually look at whether it's worth it to optimize.

        The enterprise discounts are a big consideration. Every year new hires make bad decisions because they don't know about the discounts. They wildly affect total cost. Some things are more expensive (Lambda the first few years), and others are very cheap, so we dogfood. The console price in no way reflects reality.

        In 15 years we've had about 1k services stood up, around 700 are active. 2000 or so total counting tutorials and tests. That means out of an eng org of 500, we've made those decisions maybe 10k times total.

        That's how Amazon thinks about it as well. So yeah, I agree that the UI isn't meant to be one where you're spinning up a host. I haven't spun up a single host in like 5 years, but I've made many clusters.

        But that doesn't mean it shouldn't be better to work for a wider audience. Customer obsession and all

      • temp8830 20 hours ago

        AWS UX isn't bad because engineers are bad at UX. It's because inside AWS it's every man for himself, and every team for itself. They don't collaborate, they don't talk, they compete to ship everything as quickly and cheaply as possible - quality, usability, and common sense be damned.

        • AgentOrange1234 18 hours ago

          FWIW, my team in AWS had help from UI designers who were cool people that impressed me with their work. We definitely had to push through some needless organizational friction, e.g., they were in a different org and frequently got left out of meetings, whereas we should really have been acting as one team. I don't think we saw it as everyone for themselves, we really tried to make it work and had a good, trusting relationship.

          In the end, our leadership changed what we were building so often that all of the UI work was scrapped long before we shipped. We ended up launching a janky console, quickly assembled by SDEs who were racing against deadlines. We skipped virtually all operational readiness work to meet the launch deadline. After claiming the launch win, the director, two managers, and the pm promptly left for other orgs.

          • geoduck14 17 hours ago

            Wow. Your story sounds like my company. Makes me feel less bad for the dysfunction I have to work with

        • sithadmin 17 hours ago

          That's not just AWS. That's Amazon generally. All Amazon orgs I've worked for have been like this, and due to the nature of my work, I (and my teams) have been treated like pariahs for daring to suggest that there ought to be even a minimal amount of collaboration, shared standards, and cross-pollination of ideas between teams.

        • someguyiguess 18 hours ago

        • whazor 17 hours ago

          AWS UX is bad because there are too many products and features, but also still supported legacy.

      • voncheese 20 hours ago

        > My point is that maybe it was intentional, but just bad UX culture.

        This may be valid, but even if it is, someone (or a group of people) at Amazon is violating one of their core leadership principles - Customer Obsession

        https://www.amazon.jobs/content/en/our-workplace/leadership-...

        A useful (and hopefully delightful) UX is key to showing customer obsession.

        That being said, I personally feel the UX at Amazon sucks overall, not just for pricing/packaging but even getting basic shit done. So perhaps Amazon (or at least AWS) doesn't think a good UX is a key ingredient to demonstrating Customer Obsession.

        • aurareturn 19 hours ago

          Maybe their customer obsession culture did not extend to their AWS department?

          AWS services names are notoriously bad at communicating what they actually do: https://expeditedsecurity.com/aws-in-plain-english/

        • gbear605 15 hours ago

          Everywhere on Amazon.com has bad UI/UX. For one example, the flow on checking out as a non-Prime member (not sure about Prime members) is janky and feels straight out of 2005. Like it reloads the page, taking ten+ seconds, every time you enter new data (address, credit card, personal info, etc.). I would be laughed out of the room if I tried to deliver this at work, but Amazon delivers it for millions of people.

          So no, they care zero about their customers, except maybe for getting as much money as possible out of it.

        • hvb2 19 hours ago

          Just saying, you can be customer obsessed and still not have a good feel for UX...

          Ask me how I know

      • torginus 19 hours ago

        Personally I think the UI flow is geared towards the idea that engineers don't really see the costs, they just build stuff and then management pays at the end of the month.

        Often I see something that's supposed to be leaner - like Fargate is leaner than renting a whole server to run docker, right?

        So it's cheaper as well? - Well, no.

        Also if you reach any appreciable level of complexity, you should move to IaC - configuring all that stuff on the UI, and getting it right is torture.

        • loloquwowndueo 19 hours ago

          Engineers are not entirely cost-oblivious entities.

          • aurareturn 19 hours ago

            They're not but if they don't talk to the pricing team, and most devs don't want to talk to business people, they'd never coordinate on where it makes sense to show pricing to customers.

            • loloquwowndueo 16 hours ago

              You didn’t read the comment I replied to, did you? The premise was :

              > the UI flow is geared towards the idea that engineers don't really see the costs, they just build stuff and then management pays at the end of the month.

              So this is about the engineers consuming AWS, not the ones who designed and implemented AWS

        • rowanG077 18 hours ago

          A core part of an engineer's job is factoring cost into what they do.

          • loloquwowndueo 16 hours ago

            Right - nobody who’s had a formal education in engineering would think that way, because cost considerations are part of the curriculum from the start.

            • torginus 13 hours ago

              I don't think a lot of formal education places teach AWS's resource pricing structure, which can be incredibly confusing, but can be boiled down to: if you want to be as cheap as possible, just use EC2 for everything and maybe S3 for storage.

              • rowanG077 12 hours ago

                I'm very surprised you expect any formal education to teach any specific pricing structure. You teach how to evaluate solutions for their price impact. No one was claiming any curriculum includes AWS's resource pricing structure.

            • decimalenough 14 hours ago

              I can't recall cost ever coming up as a consideration during my years of formal computer science studies in school. Big-O efficiency, sure, but the cost of compute, storage, bandwidth, nope, not once.

              It was absolutely hammered into me in the years of working for startups that followed, though.

              • loloquwowndueo 11 hours ago

                Just noticed you did say computer science, not computer engineering. Two very different things.

      • epistasis 21 hours ago

        I would argue that the intentions don't matter at all, the end result is all that matters both for the buyer and seller. In systems design, it is often said that The Purpose of a System Is What It Does. Good intentions can produce very bad systems with bad outcomes, and neutral/bad intentions can create good systems that benefit everyone.

        I think that applies both to Amazon's dev system and pricing system. From what I hear about the insides, alignment is chaotic neutral inside of Amazon, but that shouldn't affect how we judge the system itself.

      • m463 15 hours ago

        > Some devs have a good product/UX sense but the vast majority are horrendously bad at UX.

        I think the problem is that nobody understands the size of the problem.

        For most tasks, the accomplishment is getting something to work. That takes 90% of the time. But the UI requires polish, working things out, backing out and trying again, and takes the OTHER 90% of the time.

        I remember talking to a friend who worked with apple to port some dvd authoring software. And steve jobs started with the UI, and said "this is what you do". I think it was just a blank screen and you drag your video onto it. the software they were porting was a bunch of windows type confusing nonsense, and they had big changes to make.

        That said, AWS might be a dark pattern. Remember the cable companies that didn't WANT to show the hidden fees? because $29.99 a month was really $71.41?

        • jonhohle 13 hours ago

          Prior to AWS, hosts at Amazon were provisioned as “host classes” and typically operated on in that way. We were encouraged to make them “touchless”, which meant the infrastructure team could replace a host without contacting the owning team first. The deployment tool deployed to host classes (though you could put an individual server there if you wanted). EC2 wasn't quite the same, but not very foreign either. We didn't originally even use the AWS interface (at the team level); the machines were managed by a team working on the transition.

      • ricardobayes 18 hours ago

        It just goes to show if you are the first big player in a space, you can have whatever UX. No UX will override that first mover advantage. I could cite countless examples, where the first commonly known company rules the space and no newcomer with a flashy UX can come close, no matter how hard they try.

      • Y-bar 21 hours ago

        Judging by how much things jump around on the screen when I navigate from one view to another I agree.

      • pokot0 16 hours ago

        Honestly, UX has become mostly irrelevant in recent years (infra as code) and even more so in the last year (coding agents). What you need is a well-structured API and a CLI that does not limit you. You can call that UX if you want, but the skillset is different.

        When I started my latest project my first rule was: I never have to log in to the AWS console. I didn't achieve ‘never’ but I am pretty close, and the experience is a lot better

    • ivan_gammel 21 hours ago

      This is the only correct way to do it: choose an infrastructure provider that can help you deliver. AWS is good, just not for everyone. It sits somewhere between services like Heroku and bare metal, abstracting away a lot of maintenance while still offering some control over scaling architecture. That means as a cloud provider it helps you scale, not build the cheapest and simplest setup possible. If you have VC money and are pitching growth, AWS might be a safe choice - the 2 years of startup credits they offer via accelerator programs let you not worry too much about your infra budget for the first 18 months of building, before you start optimizing spending (and by then you know your usage and have good forecasting). If you are bootstrapped or an indie developer, choose what you can afford and choose something simple. Hetzner, DO, etc. will work fine.

    • parliament32 21 hours ago

      That's one of the things I like about Azure, they don't overwhelm you with listing prices beside every individual item as you're creating it, but they seem to always present a price on things that could be expensive. It's a good balance, I have yet to be surprised by a charge.

      • tcp_handshaker 21 hours ago

        Using Azure in 2026 should be a firing offense. How many cross-tenant incidents are enough for you? In 20 years of existence of AWS ( since 2006 and S3 ) show me ONE with AWS ... and I will publicly eat my hat here...

        "Azure’s Security Vulnerabilities Are Out of Control" - https://www.lastweekinaws.com/blog/azures_vulnerabilities_ar...

        • bdangubic 20 hours ago

          It should 100% be fireable offense if you have a choice

          • andreashaerter 20 hours ago

            > if you have a choice

            I just read:

            > If I had learned one thing from my past life was that if you see the signs of an abusive relationship, you have the option to walk out, and you don't, all that follows is your own fault.

            so... :)

            • jnovek 19 hours ago

              Regardless of whether the metaphor stands up, this is a horrible thing to say. Abuse victims are not responsible for the abuse they receive.

              • mrmanner 17 hours ago

                I had the same thought at first, but in context I think the quoted text refers to business relationships. Which makes all the difference.

              • reactordev 17 hours ago

                No, but they have the power to remove themselves from the situation and should have the prudence to do so. We have places for women and children to go to escape abuse, so find your hideout and escape the abusive relationship.

                • jnovek 15 hours ago

                  Here’s a cool fact: in America only 35% of DV survivors retain full custody of their children and only 45% retain primary custody.

                  If you flee domestic violence you are more likely than not to lose custody of your children to your abuser.

                  • lelanthran 14 hours ago

                    > Here’s a cool fact: in America only 35% of DV survivors retain full custody of their children and only 45% retain primary custody.

                    That's because joint custody is the default and you need really good evidence to restrict a kid's access to their father.

                    > If you flee domestic violence you are more likely than not to lose custody of your children to your abuser.

                    "Being forced to allow kids to see their father" is, to you, the same as "losing custody of your children"?

                    You're talking absolute horse puckey here. I'm also pretty certain you don't believe it.

                    • dang 14 hours ago

                      Please make your substantive points without calling names or crossing into personal attack.

                      https://news.ycombinator.com/newsguidelines.html

                      • lelanthran 14 hours ago

                        Thank you dang; here's the thing, even the most charitable reading of GPs comment indicates that he feels being unable to restrict a child's right of access to their parent is unfair in some way.

                        No matter what you may think of parents, it is absolutely horrific that someone will argue for restricting the rights of children, and do it in a way that he feels is acceptable in society (custody is only in small part about having access to one's children; the actual right is to the child, not the parent - the child has the right to access to their parent).

                        I wanted to make him understand that trampling over children's rights is not acceptable.

                • joquarky 16 hours ago

                  [flagged]

        • parliament32 17 hours ago

          Azure is a compliance requirement for us.

          Haven't had anything impacting in GovCloud, but if you're not there yet I'm sure there's shenanigans in the consumer version.

      • e40 19 hours ago

        You’ve got to be kidding. Azure is 10x the dumpster fire that AWS is. I have used both.

        • DANmode 19 hours ago

          They didn’t comment on the entirety, just the billing transparency.

    • okeuro49 18 hours ago

      When I log into AWS there is a big graph saying "Cost savings" and offers all the different ways to save money.

      The idea that AWS is abusive seems a bit much to me. There is Amazon Lightsail for people who prefer pay-monthly upfront costs.

      • testbjjl 18 hours ago

        Have you tried to use this feature? From my experience it’s typically reserved instances that provide discounts for longer contracts. It feels a lot like cable TV to me. I think the interface is difficult to use but am able to get what I want from the CLI and some scripts I have aliased.

    • Lucasoato a day ago

      > They were using AWS, so I logged in the account to add a few more machines. Right there, in front of my eyes, were the signs of an adversarial, abusive relationship.

      > The UI to fire up a new machine did not show me the price. I had to look up the price in another table that did not have the specs.

      I don’t want to be the one defending AWS, but I don’t think this is a valid reason to dislike them. I mean, pricing depends on so many factors - reserved/dedicated/spot/on-demand instances all have different prices.

      I don’t even think that using the UI to spin up the machine is the right way to do that in an enterprise setting, you should always do that through Infrastructure as Code, to know exactly what you have up and running, just by looking at that as you would with any program. I’d suggest to use the UI for simple testing, for which the costs are often (but not always) negligible.

      Jeff Bezos if you see this please send me some cash.

      • whateverboat a day ago

        I must disagree heavily with you here. Prices can depend on many factors, but when a particular account is choosing a particular machine, AWS knows what it will cost, and they could show it dynamically. It is very hard to believe, in this day and age, that you cannot have a dynamic price chart right beside the machine selector, showing or calculating prices in real time for that particular configuration.

        About using IaC to set up the infrastructure, sure, but sometimes you just need to browse stuff before actually writing code, to get a feel.

      • bulletsvshumans a day ago

        They absolutely could calculate and put the price in the UI if they wished to. Other cloud vendors do.

        • mlhpdx 19 hours ago

          For which services?

          Let’s look at Lambda for a second. Deploying a lambda function to AWS costs literally nothing. And yet, depending on how it’s used, it can cost an infinite amount of money. Which price should it show?

           There are far more services like Lambda than like EC2.

          • dingaling 14 hours ago

            > Which price should it show?

            "Estimated cost per 1000 invocations"

      • lr1970 a day ago

        > pricing depends on so many factors like reserved/dedicated/spot/on-demand instances have all different prices.

        Or you can have your own negotiated private pricing which is a whole different story in itself.

      • finaard a day ago

        I've been in a similar situation - a surprising number of companies really do just click to create instances. The last time I encountered that at a customer, I improved things a bit by creating templates and scripting instance creation based on them - but ideally we'd have had the templates themselves, as well as the network side, generated by Ansible.

        But that's the problem: The complexity of doing that properly is pretty much the same as just doing your own hardware (which is what I'm working with most of the time - handling stuff on physical servers). And at that point the question should be why you're paying AWS so much money and pay your people to automate AWS workflows when you could just pay them to automate workflows on physical hardware, which would be way cheaper to run than the AWS instances.

      • evilduck 20 hours ago

        > I mean, pricing depends on so many factors like reserved/dedicated/spot/on-demand instances have all different prices.

        If they know how to bill you then they obviously know how to consider and calculate all of these factors, they just choose not to show you up front.

      • lambda 21 hours ago

        Tell me how I can easily determine the price from my IaC deployment as well.

        Heck, I even have a hard time telling the price I pay on an account-by-account basis: because we have savings plans, those get charged against the root account, and then I see $0 spent on EC2 in the individual account because it's all covered by the savings plan.

        And when I'm putting together that IaC and trying to decide which new instance type to upgrade to, I have to dig through multiple confusing interfaces to figure out that what I want is to upgrade from m8a.4xlarge to c8a.8xlarge and how much that is going to cost me.

        • mlhpdx 19 hours ago

          In Cost Explorer, on the right-hand side where filters appear, click the “more” button at the bottom, then the first option that appears (“charge type” or something like that) and select “usage”. That will give you a view of what you’re using regardless of the savings plan, etc.
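
A rough sketch of the same view via the Cost Explorer API (dates and metric are illustrative; the actual call needs AWS credentials, so it is left commented out):

```python
# Sketch of the "charge type = Usage" view described above, expressed as a
# Cost Explorer GetCostAndUsage request. Filtering on RECORD_TYPE = "Usage"
# excludes credits/taxes/fees, so usage covered by a savings plan still
# shows up as usage rather than as $0.
request = {
    "TimePeriod": {"Start": "2024-01-01", "End": "2024-02-01"},  # illustrative dates
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
    "Filter": {"Dimensions": {"Key": "RECORD_TYPE", "Values": ["Usage"]}},
}
# import boto3
# ce = boto3.client("ce")
# resp = ce.get_cost_and_usage(**request)
```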

      • sudosteph 21 hours ago

        I'm with you. Nobody serious uses the UI to make changes with AWS. At the very least, use the AWS CLI. IaC is the norm though.

        I'm tired of people acting like complex infrastructure tooling is adversarial because it's not completely intuitive. Infrastructure is hard. AWS can give you tooling and docs with patterns to follow, but they can't read your mind. Neither can the PaaS providers - they just make choices on your behalf and hope it won't matter to you.

        • esseph 21 hours ago

          > I'm with you. Nobody serious uses the UI to make changes with AWS.

          This is still hugely prevalent at some of the largest companies in the world

          • mlhpdx 19 hours ago

            Not hugely, perhaps not even prevalent, but present, yes.

            I get to see how a lot of companies use AWS. The console does make its appearances, but less and less often these days.

      • swasheck 21 hours ago

        the pricing “API” is also a joke so it’s not like they have tried pushing people to apis and away from the console.

        i just use vantage (https://instances.vantage.sh/) now. their api is functional and reasonable.
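
For reference, the first-party route being dismissed here is the Price List ("pricing") API; a hedged sketch of an on-demand lookup (filter fields follow the EC2 price list schema, values are illustrative, and the call itself needs credentials and one of the few supported endpoint regions):

```python
# Sketch of an on-demand EC2 price lookup via AWS's own Price List API.
# Filter values are illustrative, not a recommendation.
filters = [
    {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "m8a.4xlarge"},
    {"Type": "TERM_MATCH", "Field": "regionCode", "Value": "us-east-1"},
    {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
    {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
]
# import boto3
# pricing = boto3.client("pricing", region_name="us-east-1")
# resp = pricing.get_products(ServiceCode="AmazonEC2", Filters=filters)
# Each result comes back as a JSON *string*, with the rate buried under
# terms -> OnDemand -> ... -> priceDimensions - a big part of why
# third-party wrappers like instances.vantage.sh exist.
```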

      • richwater a day ago

        The faster people realize AWS hates the need for a UI, the better.

        It should really be a read-only layer for metadata and logs.

        • mlhpdx 19 hours ago

          At this point I’m not using the UX much if ever. Everything I’m doing is via IaC or the CLI. It’s made working with AWS really smooth.

          • DANmode 18 hours ago

            Pedantry: You’re having a smooth UX (or DX) by not using the UI.

    • chuckadams a day ago

      AWS actually has a pretty good price calculator with some decent presets (but FFS, can I have an "uncheck all" button?) but of course it's an entirely separate app. Amazon naturally wants some friction to having this pricing information handy, though I suspect the main reason has to do with Conway's Law: AWS still ships their org chart.

    • mips_avatar 16 hours ago

      They have to do that because their pricing is bad. Like if you want to run a custom vector db you can run it 20x cheaper on hetzner than aws.

    • wodenokoto 12 hours ago

      Almost every organization I’ve worked for has setup their cloud such that:

      A) they are receiving massive discounts off of list prices, and

      B) they’ve setup everything such that no-one working on the cloud can see the spend.

      Companies just really don’t want employees to know what their spend is.

    • ransom1538 15 hours ago

      There will be no greater engineering feat than the ability to set a spending cap. FAANG is filled with brilliant people working alongside the smartest AI of our time, yet somehow the ability to set a spending cap has eluded some of the best engineers on the planet for over a decade.

      Einstein split the atom. Newton explained gravity. Musk can land rockets backwards on floating platforms in the ocean.

      But none of them could answer the ultimate question:

      How do I stop AWS from charging me $47k because someone forgot to turn off a Kubernetes cluster?
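
To be fair, the closest thing on offer today is AWS Budgets, which alerts (and, via budget actions, can stop tagged resources) but is not a hard cap. A sketch of the request body for `aws budgets create-budget`, with made-up amount and address:

```python
# Sketch of an AWS Budgets alert - the nearest thing to a spending cap AWS
# sells. It notifies at a threshold; it does not hard-stop billing.
budget = {
    "BudgetName": "monthly-cap",
    "BudgetType": "COST",
    "TimeUnit": "MONTHLY",
    "BudgetLimit": {"Amount": "500", "Unit": "USD"},  # illustrative amount
}
notification = {
    "NotificationType": "ACTUAL",
    "ComparisonOperator": "GREATER_THAN",
    "Threshold": 80,                 # percent of the limit
    "ThresholdType": "PERCENTAGE",
}
subscribers = [{"SubscriptionType": "EMAIL", "Address": "ops@example.com"}]  # made-up
# import boto3
# boto3.client("budgets").create_budget(
#     AccountId="123456789012", Budget=budget,
#     NotificationsWithSubscribers=[
#         {"Notification": notification, "Subscribers": subscribers}
#     ],
# )
```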

    • tcp_handshaker 21 hours ago

      >> The UI to fire up a new machine did not show me the price. I had to look up the price in another table that did not have the specs.

      This is false. The price shows up right away when you select a machine. I don't work for AWS...

      • dekhn 17 hours ago

        I do see prices when I look at the instance selection box. "On-demand linux base pricing: 0.0104 USD per hour" for t3.micro (matching the published pricing). it does not show the full price (based on any additional volumes or other configuration details).

        It gets far more complicated when you have reserved instances, and combine reserved instances with RAM sharing when working in a larger org.

      • nullstyle 21 hours ago

        Not back when i was using AWS

      • Capricorn2481 19 hours ago

        You can upload an image to imgur and show us. I don't see this.

    • hypeatei 21 hours ago

      So this project had a three month timeline and provisioning the cloud resources maybe took an extra hour or two because of crosschecking? I actually prefer the dedicated calculator pages and product pages because it gives you more insight into how things are billed. I think this is a strange thing to get hung up on, IMHO, especially as a lead / manager of developers.

    • cyberax 17 hours ago

      > The UI to fire up a new machine did not show me the price. I had to look up the price in another table that did not have the specs.

      I'm sorry, what? I just tried the EC2 launch wizard, and the price is listed right in the dropdown with the instance types. Or you can open a table with comparisons and enable the price there, along with ~20 other instance type properties.

      Yeah, the AWS UI is not great. But they go out of their way to make pricing predictable and public.

    • fragmede 15 hours ago

      > I had to have the two tables open, cross check the specs and price.

      Okay but https://ec2instances.info/ is right there. It's valid to point out that you shouldn't have to do that, but sometimes you just have to live with the relationships you have.

    • zsoltkacsandi a day ago

      I agree with you to some degree, but I would like to point out that AWS pricing is complicated enough that you cannot calculate how much you will pay from a static number showing up on the UI.

      If it bothers you that you need to open two tabs for cross-checking the costs, you may want to avoid every cloud provider, not just AWS.

      Once you have NAT gateways, CloudFront, S3, auto scaling, load balancers, etc., calculating the cost becomes an art rather than an exact science. And if you don't use these, there is no point in using AWS - there are plenty of "cheap" VPS providers.

      • PLenz a day ago

        If they can charge me for it then they can calculate it and show it to me. Anything else is obfuscation.

        • whs 19 hours ago

          I think AWS billing is complicated enough that they probably don't even know what you got charged, specifically, for this machine.

          You might have a leftover reserved instance that applies, which makes the listed price inaccurate. That reservation might even be in a different AWS account in the same organization that you don't have access to. That reservation might not even still be there between the time you quote and the time you actually launch, if someone or something launched before you.

          Your organization might also have discounts. I believe some discounts may also be very confidential. For example, my reseller's policy is that the customer must not be able to see AWS Billing in the organization root account, as supposedly the prices in that console are the prices AWS charged the reseller, while we pay the listed price minus whatever discount we negotiated ourselves.

          Finally, I suppose they don't want prices shown in multiple places, since they would need to update them all when prices change. They don't want to risk forgetting one place and getting sued for it. You can see that AWS documentation often avoids mentioning the price at all, even when that price is currently free.

          Chinese clouds kind of make this simple by making the reservation part of buying the machine itself - you mark that particular machine as monthly/yearly committed when you start it (or convert it later). The complicated part is recycling instances: if you delete a server before its reservation ends, it goes into a recycle bin that you need to check before making new reservations.

        • zsoltkacsandi a day ago

          They don't know in advance how much bandwidth you will use, how much traffic you will have, what auto-scaling rules will trigger, etc. It's not obfuscation, it's billing based on your usage. And as with everything in life, there are tradeoffs.

          • eptcyka 21 hours ago

            Give me a slider for bandwidth used, or a formula where the variables are abstracted away. If a computer can tell me how much I owe, a computer can be made to show how it came to its conclusion.
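
The formula being asked for is genuinely small; a toy sketch with made-up rates (not real AWS prices):

```python
# Toy estimator of the sort the comment asks for: one fixed knob (instance
# hours) and one variable knob (egress). Rates are made up for illustration.
def estimate_monthly_cost(hourly_rate, hours=730, egress_gb=0, egress_rate_per_gb=0.09):
    """Fixed instance cost plus variable egress - the two sliders a UI could expose."""
    return hourly_rate * hours + egress_gb * egress_rate_per_gb

# e.g. a $0.0104/hr instance pushing 100 GB out in a month:
total = estimate_monthly_cost(0.0104, egress_gb=100)  # 7.592 + 9.00 = 16.592
```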

          • mikeyouse 21 hours ago

            Sure, but providing estimated costs based on reasonable pricing buckets, alongside the options, as you add the new machine is something that every other vendor manages to do.

            • timr 21 hours ago

              ...and AWS does it too. I can go right into an account and see an estimated cost per hour, and even pre-pay at a fixed discount for longer terms if I want to. They tell me right there what it will cost. They do this for everything that is reasonably a "fixed cost", like CPU time.

              They cannot predict what my bandwidth consumption will be, or other such variable costs. For those, they tell you rates.

              • Capricorn2481 20 hours ago

                No it's pretty bad. They show you the cost after all the resources are set up. Even setting up an ec2 instance, a really basic use case that has a fixed cost based on size, you have to go Google it and find their ec2 pricing table. It would take no space to just put the price per hour in the drop-down as you're picking an instance. But no, they obscure it on purpose.

                That's just for ec2. Everything is like this. Super awesome when you're being brought onto a new project and trying to estimate costs for your client. And let's not forget the little tiny things that should cost nothing. A NAT gateway with no redundancy is $30/mo. That's a fun surprise.

                • cyberax 17 hours ago

                  > Even setting up an ec2 instance, a really basic use case that has a fixed cost based on size, you have to go Google it and find their ec2 pricing table

                  This is the "Comparison Table" from the EC2 launch wizard: https://imgur.com/a/YjFhkzb

                  The pricing is right there, along with filtering and sorting.

                  • Capricorn2481 14 hours ago

                    For the record, my original complaint that ec2 did not have pricing in the dropdown seems to be untrue right now, which is great! For the sake of UX discussion, I want to talk about your picture as if that were the only way to get this info. So let me explain why that's bad.

                    The main reason is this is only true for ec2 and every other resource has its own slightly different way of getting the cost, making it really easy to miss things like this. But here are the steps we take to get to your image.

                    - First you click compare instance types, and you're brought to a completely different page with a table.

                    - By default, there is no column for pricing, but two columns for "storage space" even though most of the instance types have these blank.

                    - There's nothing that says you can add columns to this page. You eventually figure out it's the gear icon.

                    - Then you click the gear on the top right to look at column names. You try searching the 44 column names for "price" or "cost" but both of those turn up blank, because there's no fuzzy searching.

                    - So rather than use the search box, you manually scroll through all 44 column names and find pricing at the bottom of the list.

                    This is the definition of out of the way. It's hard to imagine why you would default to showing two different storage columns over the pricing column, when half the instances are blank on storage.

                    Now do FSx, which has no pricing information at all, or any links to pricing information. They have an info tab telling you your backups are incremental, which would make you think they are fairly inexpensive. Not more expensive than the filesystem itself!

                    • cyberax 14 hours ago

                      And to add to this: AWS teams were also quite focused on avoiding surprise bills for customers. Because surprise bills led to customer support interactions, Sev2/3 tickets that needed investigation, etc.

                    • cyberax 14 hours ago

                      This is simply because AWS UI is not made by one team. Each individual team makes their own UI/UX decisions, and things like pricing info just get forgotten and/or scheduled "for later".

                      So they just added a default table widget, and they didn't even bother with customizing it. You can enable the context menu for the table's rows, which works and is empty.

                      I worked at AWS around 6 years ago, and we had a great win with just getting access to a service that provided the full list of available instance types and base prices.

                      This kind of disjointness is both good and bad. It's good in the sense that individual services stay within reasonable complexity, and usually all the functionality is available through the public APIs because the UI console is just another consumer of these APIs. AWS is also very careful with permissions, internal services try to avoid escalating privileges and try to perform everything using the user-visible access policies.

                      But it's bad because integration just sucks, and the UI layer is the ultimate example of this. AWS console _is_ really messy.

          • sillyfluke 21 hours ago

            Now explain why they don't have a killswitch for a user defined spending limit.

            • simondotau 21 hours ago

              IMHO it should be illegal to force consumers to have an infinite spending limit on a post-paid service with consumption charges. If I want to cap my unpaid expenditure at any amount, I should be legally entitled to do so.

            • nostrebored 21 hours ago

              How many real applications actually want this behavior? AWS is not built around hobbyist needs. It’s built around being a platform to run most shapes of production use cases.

              • sillyfluke 19 hours ago

                This has been a feature request since AWS was a thing.

                >AWS is not built around hobbyist needs

                Yes, as if no startup teams are tasked to remain within hard spending targets when they're trying to build a POC with technologies that they are not initially experts in.

              • pixl97 20 hours ago

                I mean, judging by the number of people that end up with $100,000 charges in a few hours posting on HN, I'd say a lot more than you're giving it credit for.

              • Capricorn2481 19 hours ago

                I don't think companies want their bill run up either.

                • lukeschlather 19 hours ago

                  It's very common for companies to have a $1M/year contract that depends on $100k/year in AWS resources. (and maybe they have 3+ such contracts.) They could lose a contract if their account gets shut down for nonpayment, it's hard to say how much of an overage they would prefer to having their account suspended, but AWS is optimized for these kinds of customers where every dollar spent on hosting drives some multiple of revenue.

            • zsoltkacsandi 21 hours ago

              You can set up cost-based alerts (actual or forecasted) that send notifications via email or SNS. Based on this you can set up automations, such as applying an IAM policy to prevent further resource creation, shut down resources, etc.
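The decision logic behind such an automation is small. A minimal sketch, with made-up thresholds and names (the real wiring would be an AWS Budgets alert firing SNS, which triggers a Lambda that attaches a deny policy via boto3; none of that AWS plumbing is shown here):

```python
# Illustrative deny-all policy document the automation could attach
# once the limit is hit. Actions listed are examples, not exhaustive.
DENY_CREATE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["ec2:RunInstances", "rds:CreateDBInstance"],
        "Resource": "*",
    }],
}

def decide_action(actual_spend: float, monthly_limit: float) -> str:
    """Map current spend against the limit to an escalation step."""
    ratio = actual_spend / monthly_limit
    if ratio >= 1.0:
        return "attach-deny-policy"   # block further resource creation
    if ratio >= 0.8:
        return "notify"               # warn via SNS/email first
    return "noop"
```

In a real Lambda handler you would parse the SNS payload from the budget alert and, on "attach-deny-policy", call the IAM API with a document like the one above, or stop/scale down resources directly.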

              • zsoltkacsandi 18 hours ago

                Interesting to see that some people assumed there is no kill-switch mechanism, and when it turns out they just did not know about it, the (totally valid and factual) comment gets downvoted because it goes against their initial assumption. Not what I would have expected on a professional forum.

                • Demiurge 17 hours ago

                  I do not downvote comments when I disagree, and I think it’s better to explain why I would strongly disagree. Downvoting in this case almost reinforces the notion that the downvoted comment makes such a good point that it causes people to give up on the discourse and just smash the panic downvote button. It’s obvious to me why this is not the case for this comment.

                  The suggestion to set up some kind of IAM policy to shut things down and stop resource usage is insanely complicated for the users who need this kind of feature the most. If I’m learning AWS and just added my CC to it, I am the last person to be qualified to set up this kind of alert and policy from scratch. This needs to be a single text input on the billing page, like it is for countless spend-as-you-go services. When the limit is hit, the service needs to stop the usage at the customer’s peril, because that’s what the customer requested.

                  Hope this helps.

                  • zsoltkacsandi 6 hours ago

                    > The suggestion to setup some kind of IAM policy to shut things down and stop resource usage is insanely complicated for users who need this kind of feature the most.

                    We set this up at my last job in like 10 minutes. Complexity is a matter of perspective: if it's your job to do this, you have done it many, many times, and you have ready-to-use infrastructure-as-code templates.

                    Yes, AWS is massive, the documentation is huge and makes things inherently complex, but flexible too. You can define what behavior you want when you exceed your limits. We can argue whether this is obfuscation or complexity or what, but based on my experience AWS optimizes its product for enterprise-ish companies that can afford to have SREs who know exactly what to do in such cases. That is where they have their own training/certification program. For simple use cases there is AWS Lightsail, where pricing is simple and easy to understand.

                    But even if it were insanely complicated, is that a reason to downvote? HN used to be better than this kind of "I don't like your comment, let's downvote it".

          • tomrod 21 hours ago

            Price simulators are fine. They also know the distribution of use. They can do cost plus pricing (many cloud providers do). You're defending deliberately obfuscated pricing when it need not be obfuscated.

            • mlhpdx 20 hours ago

              As I read through these comments I’m thinking about the dynamic range of AWS customers: from my little hobby account to my business account to some hyper-scaler’s account.

              I think about the diversity in usage patterns: from generating a giant video stream broadcast to somebody trying to calculate yet another digit of pi. It’s wild.

              It’s true, probably, that AWS doesn’t know how much anyone’s use case will cost (even when it’s yet another version of something we’ve seen before). Too many variables.

              If only there were some kind of software with a text based, natural language interface that we could ask a question like “how much would it cost to do XYNZ on AWS?”

            • zsoltkacsandi 21 hours ago

              > Price simulators are fine.

              Yes, as long as you do not have seasonal traffic, auto-scaling, spot instances, burstable instances, savings plans, reserved instances, floor/custom pricing, etc. These are tools to optimize your spending and spend less if you know what you are doing.

              > defending deliberately obfuscated pricing

              A bit contradictory that price simulators are fine, but then the pricing is deliberately obfuscated. Then which one?

      • windexh8er 21 hours ago

        I don't think your comment hits like you think it does. I think your intent with "cheap" implies some level of being lesser. In my experience that is not the reality. Similar to OP, I migrated a startup from 5x-the-cost AWS to DO years ago. In fact that "cheap" competitor was able to give them better performance, more reliability and more features for a lot less.

        AWS is almost never required and almost never the best option. It's the Cisco of options, it's often the default but for no good reason other than someone on the team probably knows enough about AWS to make it work.

        Almost every startup I've worked at has leveraged AWS as their primary, but even when not, they end up using AWS for something. And in every startup there's always contention with AWS spend, and all of these startups invest significant time and, funny enough, money (via cost-savings products or consulting) to reduce their AWS bill. And yet, they never seem to try anything else. Doomed to the cyclical cost-savings loop. Amazon knows this and the UI/UX is designed to keep companies in this money-burning loop.

        Finally, AWS isn't a silver bullet. For anyone in us-east-1, you know [0].

        [0] https://mashable.com/article/amazon-web-services-outage-may-...

        • RulerOf 13 hours ago

          > Finally, AWS isn't a silver bullet. For anyone in us-east-1, you know [0].

          I probably should have commented on the original article here, but I pulled all of my company's production infra out of that AZ back in 2019 because AWS dragged its feet for too long deploying 5th gen hardware there.

          I assumed the racks were full or something. I still don't know if they ever did get newer hardware in that AZ—I just avoid it like the plague.

          I had a light chuckle this week when I discovered the work I did out of sheer frustration saved us from a partial outage seven years later.

      • callmeal 19 hours ago

        >If it bothers you that you need to open two tabs for cross-checking the costs, you may want to avoid every cloud provider, not just AWS.

        On Google Cloud Compute, the UI shows an updated 'cost' as you start building your machine.

      • aljgz 21 hours ago

        Not showing the price was not "my problem". It was the sign of a product packed with traps, footguns and all kind of things that would go wrong and the blame goes to the user.

        No thank you

        • mlhpdx 20 hours ago

          I’m not sure I understand. AWS has detailed pricing information for each service.

          I’ve never felt surprised by pricing. Cost has been surprising, but that happens when usage is surprising in my experience.

          • Capricorn2481 19 hours ago

            In completely unrelated pages to where you set up resources, yes. EC2 pricing is in a random doc disconnected from the AWS console.

            They absolutely could show you a base price on the EC2 setup page, but they don't. And I have been absolutely surprised by pricing. Services that do almost nothing can cost more than your EC2.

            • nemothekid 18 hours ago

              Respectfully, I think this is more of a use case where you aren't the target audience for a service like AWS.

              I've been working with AWS for nearly 10 years. Many people I know, both small and large, just don't even use the console. If I need to figure out how much a project costs I use the AWS pricing calculator. Having EC2 pricing on the launch page is meaningless once you spend any meaningful amount of time in AWS. Once you add discounts and reserved instances, that number is going to be inaccurate anyway.
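As a toy illustration of why a posted list price stops being the number that matters once discounts apply (the rates below are invented, not real AWS prices):

```python
# Hypothetical on-demand rate and discount; real prices vary by
# instance type, region, term, and payment option.

def effective_hourly(on_demand: float, discount_pct: float) -> float:
    """Hourly rate after a flat percentage discount (e.g. a reserved instance)."""
    return on_demand * (1 - discount_pct / 100)

def monthly_cost(hourly: float, hours: int = 730) -> float:
    """Cost for a full month of continuous use (~730 hours)."""
    return hourly * hours

list_price = 0.10                              # made-up $/hour on demand
ri_price = effective_hourly(list_price, 40)    # e.g. a ~40% 1-year RI discount

print(round(monthly_cost(list_price), 2))      # 73.0 on demand
print(round(monthly_cost(ri_price), 2))        # 43.8 with the reservation
```

The same instance, run the same way, ends up with a very different bill, which is the commenter's point about inline list prices being misleading for heavy users.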

              If you just need a VPS provider, there are better, less complex options. I find these complaints kind of like stepping into an F1 car and complaining that the F1 car is deceiving you because there's no fuel gauge.

              • Capricorn2481 17 hours ago

                I'm a contractor, so my deployment complexity is whatever my current client's complexity is.

                > If you just need a VPS provider, there are better, less complex options. I find these complaints kind of like stepping into an F1 car and complaining that the F1 car is deceiving you because theres no fuel gauge

                That's fine if you feel that way. The article and following discussion is clearly about the smaller audience, and I think you're underestimating how far up these little problems stack and scale. If a couple grand is a rounding error to you, that's great. Most businesses fall firmly in the place where that would be a problem.

                I think there is a value add for large companies on AWS, but for smaller ones, I don't particularly feel like AWS is an F1 car, more like a self driving Tesla that locks you inside when it's on fire. And I find the cavalier attitude that these companies aren't important enough to add the distinction to be exhausting. AWS is being pushed on everyone.

                • nemothekid 15 hours ago

                  AWS is being pushed on everyone the same way Hadoop was pushed on everyone in the 2010s and IBM in the 90s. Everyone sees themselves as webscale when their data could reasonably fit in Excel. If the only products on AWS you are using are EC2 and S3, you are choosing the wrong tool.

                  The complexity of AWS is because a service like AWS is complex. Neither Azure or GCP has any less complexity. DigitalOcean offers way less services and as a result is way less complex.

                  >And I find the cavalier attitude that these companies aren't important enough to add the distinction to be exhausting

                  They aren't important in the same way an F1 car doesn't think families are important enough to add a back row seat. No company is going to have the fidelity to serve a perfect product to every market. The frustration comes from the misplaced belief that a product should serve every kind of user in the market.

                  • Capricorn2481 15 hours ago

                    > They aren't important in the same way a F1 car doesn't think families are important enough to add a back row seat

                    I don't know of anyone saying you should buy an F1 car for your family, do you?

                    I do see people in this very thread with very different ideas of when AWS makes sense for you.

                    • nemothekid 12 hours ago

                      >I don't know of anyone saying you should buy an F1 car for your family, do you?

                      It's a metaphor. Your clients telling you they need you to deploy on AWS are the kind of people I believe are telling you to buy an F1 car to daily drive to whole foods. You said it yourself: "AWS is being pushed on everyone".

                      >I do see people in this very thread with very different ideas of when AWS makes sense for you.

                      Naturally. However, 99% of what I believe are illegitimate complaints about AWS (AWS has tons of legitimate complaints) are from people who were probably better served by a simple VPS provider than a cloud provider. A VPS provider is simpler, easier to understand with respect to pricing, and cheaper. Most of the complexity in AWS comes from the fact that AWS itself is a very complex tool targeted at large organizations and deployments where people aren't using EC2 instances, or are using 100s of them. The complaint that the UI doesn't have enough affordances when creating a single EC2 instance is kind of ridiculous when you consider it's a tool designed for people launching 100s of instances. Nobody is reasonably launching 100 instances through the dashboard. Furthermore, if vendor lock-in is a concern, AWS is the wrong tool.

                      Likewise for IAM. People complain a lot about IAM. But AWS has a thousand different user types and a thousand different services. I've written my fair share of permission systems with a fraction of the number of permutations. They always become complex due to the combinatorial nature. GCP manages to somehow be even worse. But you wouldn't need to deal with something like IAM if you just stuck with a VPS.

            • mlhpdx 19 hours ago

              Not unrelated pages. All AWS pricing is and has been for a very long time posted on predictable pages alongside the service marketing and documentation. The console is the console. I, for one, don’t want to see pricing in the console or in cloudformation or CDK documentation — because if in one then in all, right?

            • Capricorn2481 14 hours ago

              Replying to myself: this is not true anymore, for EC2 at least. I think the fact that my comment was upvoted so much really speaks to how chaotic and inconsistent the UI is, because you get a totally different experience using other services.

              For instance, I don't see any pricing information when setting up an FSx filesystem, even for the size you set up. And there's definitely nothing saying backups will cost you more than storage (even though they are incremental?)

        • zsoltkacsandi 21 hours ago

          > It was the sign of a product packed with traps, footguns and all kind of things that would go wrong and the blame goes to the user.

          I spent 5 years optimizing spending on AWS at various companies. Yes, it does come with traps and footguns. On the other hand, if you know what you are doing, there are plenty of tools to optimize your spending with RIs, savings plans, auto-scaling, etc., and spend less than the list prices.

          Based on my experience, AWS is for the companies that can afford to pay surprise bills out of pocket if something goes wrong.

          • epistasis 21 hours ago

            Everything you describe is reinforcing the point of the person you are responding to.

            • zsoltkacsandi 18 hours ago

              Yes, and exactly that is why I started my first comment with this: "I agree with you at some degree".

              I agree with him/her, just shared my more nuanced take, based on my experience coming from my past workplaces.

    • b40d-48b2-979e 18 hours ago

          if you see the signs of an abusive relationship, you have the option to walk out, and you don't, all that follows is your own fault.
      
      This is needlessly victim blaming and reductive. You're ignoring the dynamics of a relationship and how victims of abuse are often financially dependent on their abuser.
      • DANmode 18 hours ago

        > [if] you have the option to walk out, and you don’t

        Ignores nothing, and blames no victim.

        It advises people to avoid becoming one when possible.

        • rmunn 18 hours ago

          "all that follows is your own fault" does blame the victim. The abuse is definitely NOT the victim's fault, it's 100% the fault of the abuser. (Most of the time; I won't say there are never mutually-abusive relationships, but most of the time it's one way).

          Thing about abusive relationships is, though, many (I would go so far as to say "most" but I'm no expert on the numbers) people in one have lots of options to walk out... but they either don't know they can walk out, or they don't feel that they can.

          So telling them it's their own fault for not leaving, when they didn't really understand that leaving was an option, does blame them unfairly.

          Now, when the analogy is employee-employer, the "don't feel that they can" so often doesn't apply: the psychological reason for not leaving ("but I love him!") is almost never something the employee feels. But the "I feel trapped" reason (it's the only job I could find that makes nearly the money I need for my mortgage, if I leave then we might lose the house, etc.) VERY often applies.

          EDIT to add this P.S.: I understand the intent of saying that was to advise people "Hey, walk away when you get the chance, otherwise everything that happens to you was 100% avoidable". But saying "it's your fault" is going too far. I've seen people claim that statements purely intended as advice (like "Hey, if you park your car in THAT neighborhood, you might wanna lock your doors and not leave any valuables in sight so nobody smashes your window") are victim-blaming. But it's really, really about the phrasing. The example I gave was definitely NOT victim blaming. Saying "Well, you were asking for it by parking your car there" WOULD be victim-blaming. The way it's phrased is very important. And saying "all that follows is your fault" is most definitely wrongly blaming the victim.

          • lukan 18 hours ago

            If the victim had the option to walk, but did not, then it is their fault for not doing so.

            If I know a dog is dangerous but try to touch it anyway and get bitten, then yes, the evil dog bit me, but it was my fault for not reacting to danger. Same with an abusive company: if you know they are abusive but still sign a contract because it seems convenient, then it is still an abusive company, but it's your fault for getting into a relationship with them.

          • DANmode 15 hours ago

            > The abuse is definitely NOT the victim's fault, it's 100% the fault of the abuser.

            At some stage, (regardless of law or what’s right), standing in a pedestrian-crossing on a busy thoroughfare is foolish.

            Keep hanging out in the crosswalk hoping Bezos will stop for you,

            if you want,

            but don’t chastise those warning others to move.

            • rmunn 7 hours ago

              Did you see the P.S. I edited in? I'm not taking objection to "hey, you should move" (or "hey, you probably shouldn't park there if you don't want your car windows smahsed"). It's the specific "it's your fault" phrasing I'm objecting to. It would have been better phrased as "remember, everything that happens afterwards is something you could have avoided". The line can be fuzzy sometimes, and I've definitely seen people throwing around accusations of victim-blaming where such accusations are unwarranted (someone saying "hey, if you're a woman, you'd be wise not to hang out in such-and-such a neighborhood alone after dark" is definitely trying to give advice, not victim-blame, but I've seen something phrased in nearly exactly those terms — I don't recall them verbatim — get unfairly accused of victim-blaming). So I agree with you in many cases. This specific one, though, was phrased as "it's your fault" and I can't agree with that phrasing. It's still the abuser's fault, even if the victim didn't take action to get out of the situation.

              But yes, people in abusive relationships (whether in their personal or professional lives) should be advised to get out of there, and should be helped to do so as best as you can. No qualms with that.

  • mattbillenstein 13 hours ago

    I tend to use as few services on each cloud as possible so it's easy to switch between them; spinning up an Ubuntu VM that's identical on nearly every cloud is a superpower.

    And if you keep it simple like this, it's not too complex and the costs are knowable: mostly VM hours and S3 for most of what I run.

    But, the thing I've become increasingly disappointed with is simply the performance. The cpus are _slow_ - being forced to use EBS for a lot of things is _slow_ as hell; and starting/hydrating new VM volumes is super duper slow (have fun paying for fast launch).

    So, for what you pay vs what you get, it's a huge difference, albeit very convenient.

    Increasingly, I think about racking stuff: run most of your workload on dedicated hardware somewhere close to an AWS region, burst into the cloud as needed, and just use S3 in that region. Reduced cost, better performance for what matters, and you just pay for hands-on in the datacenter. Send them servers and manage it all remotely.

  • exabrial 17 hours ago

    We invested in colocation in 2022-2024 for non-prod (log aggregation, GitLab, warehouse databases, analytics loads, etc). We didn't know what kind of savings we accidentally set ourselves up for. Investing the equivalent of 3 months of DO and AWS bills permanently cut our spending, which has never increased since. If these systems go offline, it's an inconvenience but not a show stopper.

    We intentionally engineer prod so it doesn't rely on any system in the colo (so nothing like 'store our config in git and the apps pull it on startup' type party tricks).

    With memory prices right now it's harder to recommend expanding colocation but it's something every company needs to do (eventually). Not every system you have has equal production value.

  • xmcp123 a day ago

    Something that has always bothered me an outsized amount is Elasticache.

    I will bite the bullet and pay for RDS because it adds a lot of value - scalability, a reasonably optimized config, backups I don’t have to worry about.

    But Elasticache is exploitatively priced with almost no value add.

    It is slower, less optimized, less stable, and only supports one DB compared to a vanilla redis install with zero configuration.

    There are some scalability improvements, but it’s extremely rare they’re even required because vanilla redis so wildly outperforms elasticache on a similar instance.

    • zdc1 20 hours ago

      Elasticache is definitely one of the services to consider self-hosting.

      AWS doesn't add much in terms of APIs or polish. On the other hand, Redis/Valkey is one of the simplest services to self-host.

  • fxtentacle 3 hours ago

    "Maybe one day they will get around to unsuspending my account." is increasingly how support feels with all big cloud companies.

  • h1fra a day ago

    To this day I still don't understand why people love AWS. It's overly complex, full of dark patterns, and not even that good compared to alternatives.

  • rembal a day ago

    +1 on the IAM over-engineering, though to AWS's credit, I suspect it evolved rather than being designed, and that's what you get when evolution has to maintain some level of backward compatibility (think humans still having to be able to lay eggs). Another thing that happens occasionally to SaaS companies is AWS creating a copy of their product in a bit of a sus way - but it's not a technical problem, it's a business model problem.

    • kikimora 19 hours ago

      This is unfortunately unavoidable for any system like IAM. All of them evolve into monstrosity because of so many conflicting requirements. Most importantly being simple and tractable on one end and being able to express any imaginable predicate on another.

    • viccis 16 hours ago

      And god help you if you want to use one of their many competing data engineering tools, all of which will be duct taped onto Glue and require not just IAM but also another layer of RBAC on top of IAM. Like you said with IAM, I think it just slowly evolved into the mess it is today, but it's rough. Trying to just run a simple Spark query using an S3 Table Bucket was enough to remind me why Snowflake and Databricks are printing money by making it a more user friendly experience.

  • djinn a day ago

    AWS has been systematically hollowed out of technical staff since 2023, either through mass layoffs or via 2 cycles of performance improvement plans. Often I find the most skilled peers in presales or support are no longer with AWS, whilst the ones with the most ambiguous work histories have been retained and promoted.

    Use AWS at your own risk, Paul Vixie is not there to save you.

  • continueops_com 2 hours ago

    Had a similar one. They switched off Lambda and SNS because of a potential credential leak — none had actually leaked — and I was without service for 48 hours. Same flavour as the post: the provider's heuristic was probably right to fire, but you only find out which of your things were load-bearing once they're gone.

  • anymouse123456 18 hours ago

    Like OP, I was an AWS booster for many years (also a Heroku lover), but fell out of love about 10 years ago for the same reasons.

    - It felt like far too much complexity just to do simple things.

    - The obvious attempts to trap customers with slightly incompatible, higher level services felt gross

    - The inability to run AWS trash on a dev machine had a MASSIVE hit on productivity

    - Pricing didn't fall as fast as I felt it should (an obviously debatable position that reasonable, smart folks disagree with)

    In my current company, we've been running basic SMB/tech startup functions on-prem (ACK! THE HORROR!) from ~6 basic computers (4 game machines and 2 nucs) for a few years now.

    We just reconstituted the entire infra working part-time over about 2 weeks using Claude code and ansible.

    It really doesn't make sense in this world to pay tens of thousands of dollars to rent a level of computation that can be purchased and managed for a tiny fraction of that money.

    We're also seeing massive dividends paying out with this architecture because we have self-hosted gitea, along with a local workstation for our agents to run in, and now our agents have all of the context without us relying on Github or ingress/egress fees at all.

    [edited for formatting only]

    • oneneptune 16 hours ago

      The value in paying someone is if you have enterprise requirements for physical data security. Beyond that, if you go the Hetzner route, you have to micromanage your underlying OS, Redis, DB, etc., and it's just more work; and if you're in the enterprise business, it reduces friction a lot to just pay someone a trivial amount like $10,000 a month.

  • mavsman 18 hours ago

    This title sounds like an employment experience post: "returned to" and "left" both pretty strongly insinuate joining AWS and leaving employment, not simply using it as a customer.

  • zmmmmm 10 hours ago

    I am curious the effect AI will have on these cloud offerings.

    On the one hand, they bust through a bunch of the pain points of setting it up and configuring it. Especially if you are trying to do it using something like Terraform etc. So they make it more accessible.

    But on the other hand, they equally reduce the pain of building all the premium part of the offering yourself. Why do I need AWS ECS / ALB / autoscaling etc etc services if I can get all that configured on bare metal just as easy now?

    So in a different scenario, all the lock-in and premium services wither away and it all reduces to commodity compute - in some sense, back to where it should never have left. Initially I experienced joy as the bitter battles I fought with Terraform became smooth prompts I issued to have Claude deal with all my problems. Life got much better. But I'm now definitely moving into frustration, because it's clear that AWS is mostly a middleman causing friction across a whole set of infra that I could be managing directly. So I'm paying for the privilege of all this frustration. Why?

    I don't know at the moment which way this will go, but I'm quite curious about it.

  • thomas_witt 5 hours ago

    It's hard to take the author seriously as technically competent when reading "DynamoDB what a hot pile of garbage. I tried it and ended up with a $75USD bill by the end of the day."

    Clearly there are many people - including me - who built highly scalable, available and near maintenance-free systems using DynamoDB for a ridiculously low cost.

    I have no idea how you can actually burn more than $5 in development on DDB. If you don't make the effort to explore what a technology is built for, and/or clearly didn't understand it, maybe you should hold back on ranting about it. Unless you want to look like a fool.

    Same goes for IAM. It's complex but still easily understandable to get the basics. Creating e.g. a rule where you can only read from a DynamoDB table but not delete entries or the whole table takes you under 10 clicks.
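For reference, the policy document those clicks would roughly generate can be written down directly. A sketch as a Python dict (the table name is a placeholder; the actions listed are the standard DynamoDB read actions, with destructive ones simply never granted):

```python
# Read-only DynamoDB access: allow reads, grant nothing destructive.
# Under IAM's default-deny model, DeleteItem/DeleteTable are refused
# simply because no statement allows them.
READ_ONLY_DDB_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "dynamodb:GetItem",
            "dynamodb:BatchGetItem",
            "dynamodb:Query",
            "dynamodb:Scan",
        ],
        "Resource": "arn:aws:dynamodb:*:*:table/my-table",  # placeholder ARN
    }],
}
```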

  • sqircles 21 hours ago

    The billing footguns are a major pain point for anyone that doesn't have the capital to just throw blind faith, paired with a credit card, at it. This of course is not limited to AWS...

  • imrozim 3 hours ago

    Setting up AWS for my startup felt like a full-time job, so I gave up and just used Supabase. AWS complexity is real even for simple stuff.

  • regularfry 13 hours ago

    Can report that exiting from Lambda to something more sane (like, say, a django task or api endpoint or something) is now pronounced "hey copilot, look in that directory and implement precisely the same functionality over here". Or thereabouts. A whole lot of things suddenly look a lot less locked in.

  • amluto 21 hours ago

    My current favorites on AWS, in no particular order:

    1. IAM and policies. I’m not convinced that anyone knows how IAM rules and policy rules interact. There’s a flow chart that appears to be incomplete. There is not obviously a complete enough spec that one could, say, write a test suite to confirm that the actual behavior follows the spec. LLMs, of course, don’t know either because the training data does not exist.

    2. Utter nonsense pricing. The cost of listing an S3 bucket goes up by an order of magnitude if you set the default storage class to archive despite this having nothing whatsoever to do with the operation in question. (But GCS adds two orders of magnitude for the same offense.) Conclusion: NEVER EVER set your default storage class to an archive tier.

    3. Boto. It’s an Unbelievable Piece Of Crap. It’s not a library at all — it’s a meta-library that generates itself at runtime because someone had fun doing that and because Python didn’t stop them. Python type checkers, of course, just give up. And Boto is, um, a community project that AWS claims not to care about. Which is, of course, why its maintainers refused to fix an interop bug with GCS (I fully documented the entire bug for them, and the fix would have been the removal of a bit of pointless code).

    4. Egress pricing. And the way it multiplies if you use any advanced VPC features. Why on Earth is it cheaper to send an object to S3 from my own machine than to send the same object to the same endpoint from within a different AWS region nearby?

    5. Authentication. It’s so bad that they invented Identity Center to try to unsuck it. But if you use Identity Center you get logged out even while actively using the console, and you get a helpful link to the WRONG PLACE to sign back in. Because of course core AWS isn’t even aware that Identity Center exists.

    I don’t even use AWS very much. I’m sure I would fall in love with more of it if I did.
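    The runtime-generation complaint in point 3 is easy to reproduce in miniature. This is not boto's actual code, just a toy showing why a client whose methods are conjured from a service description at runtime gives static type checkers nothing to work with:

```python
class ServiceClient:
    """Toy meta-client: operations come from a service description passed in
    at runtime, so methods like put_object don't exist until __init__ runs -
    which is exactly why static analysis of such a client gives up."""

    def __init__(self, operations):
        for op in operations:
            # Bind each operation name to a generated method. The default
            # argument freezes the name so each lambda keeps its own op.
            setattr(self, op, lambda _op=op, **params: {"operation": _op, "params": params})

# In the real thing, the "service description" is a bundled JSON model.
client = ServiceClient(["list_buckets", "put_object"])
resp = client.put_object(Bucket="example", Key="k")
```

    A type checker sees an empty class; every call site is effectively unchecked.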

  • gchamonlive 15 hours ago

    > AWS Lambda - yeah I really bought the sell on this - "its scalable!!!!", and I ignored the slow startup times, the MASSIVE development complexity.

    I don't know... Maybe I spent too much time studying how to tame AWS using IaC and gitops reproducible deployments, but AWS Lambda seemed to me the most impressively simple and inexpensive product. Once I did a complete project, from end to end, designing the architecture and flow of multiple lambdas communicating with each other through SQS queues to search, extract, and load info from geotiff files on S3 into a PostgreSQL database, and it was really straightforward.

    If you leverage docker images for deployment and separate the interface for treating lambda requests from the core logic, it doesn't have much space for surprises.
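    That separation might be sketched like so; the SQS `Records` batch shape is the real event format, while the processing function is a hypothetical stand-in for the actual core logic:

```python
def process_geotiff(key: str) -> dict:
    """Core logic: a pure function with no Lambda types anywhere, so it is
    trivially unit-testable outside AWS. (Stand-in for real processing.)"""
    return {"key": key, "status": "processed"}

def handler(event: dict, context=None) -> list:
    """Thin Lambda adapter: unpack the SQS event, delegate to core logic."""
    results = []
    for record in event.get("Records", []):  # SQS delivers a batch of Records
        results.append(process_geotiff(record["body"]))
    return results
```

    Swapping Lambda out later means rewriting only the adapter, not the logic.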

    If the author had gone with the cliché that Lambda scalability can harm your budget, it wouldn't have been original, but at least it would have been plausible. But complex? I don't know; maybe someone could present the case in more detail for why it's so complex.

  • finaard a day ago

    > My business email system still does not work.

    This is always the weird thing in those rants. He's complaining that after 4 days his mail is still offline.

    Now I'm doing a mix of physical servers in rented rackspace, and rented servers - but even there I can have billing mixups where they deactivate servers for no good reason. And to get email working again the limiting factor would be the DNS TTL - new servers would be online somewhere else within hours of it going down. (And yes, I tested that just last year - one hoster threatened cutoff due to non-payment on a paid invoice, which prompted me to move the mail server just in case while getting this resolved).

    • somewhatgoated a day ago

      I don’t get your point, what is the weird thing?

      That he is complaining about his email being down or that he trusted AWS at all with email?

      • rkent a day ago

        The only way that email is down for days for a competent sysadmin is if their DNS is also with AWS, so I assumed that was the case. Assuming that is true, what is weird to me is that, after deciding he hated AWS and leaving it, he still kept his business DNS (the most important service there is) with AWS.

        • hluska 21 hours ago

            If you had read the article, you would know that the writer had DNS hosted at AWS, would have read why he made that choice, and would know of his plans to migrate off.

          • finaard 21 hours ago

              I assumed he just had DNS at AWS, but after re-reading I guess he has DNS _and_ domain registrations at AWS, which would be a special kind of stupid. That's something we were advising customers against back when cloud wasn't even a thing yet, to enable fast transfers when stuff goes south.

              (To clarify: DNS + domain at the same service can be OK, as long as you have nothing else there. As soon as you start having other stuff, keep the DNS there but move the domain registration away. Depending on the domain, make sure you have auth keys, access to the admin domain, or whatever would enable moving the domain without registrar cooperation. In my hosting days I did my fair share of emergency transfers and infrastructure work to help companies get their basics online again after a SNAFU - totally doable to have the first mail coming in again within a working day.)

    • panny a day ago

      >new servers would be online somewhere else within hours of it going down

      Yeah, no that's not how it works with email. You have to build reputation for weeks or receivers throttle you.

      • kassner 20 hours ago

        It is pretty much unacceptable to have a domain bouncing emails, so I’d be out of the provider before the MX TTL even expires.

        For outgoing emails, reputation is a huge issue, but at the same time it’s also fairly trivial to set up a (different) 3rd-party (gmail, outlook, sendgrid, whatever) with previous reputation so you can get back communicating.

      • finaard 21 hours ago

        I'm not running a spam business. I've been operating my own mailservers (and related infrastructure) for more than 25 years now, without issues.

  • simonebrunozzi 2 hours ago

    Hmm... I was very early at AWS, and it might even be that "the guy from the US" who spoke at the first AWS Melbourne event was me, back in 2010.

    I agree with a few of the things that annoyed Andrew Stuart and brought him to leave. I disagree with a few others. Let's pick one: DynamoDB was brilliant. I even knew one of the key engineers behind it, Stefano Stefani, who was as brilliant as he was hilariously funny as a person. It solved large-scale problems beautifully, much better than SimpleDB, or a combination of that and S3, ever could.

    But I really disagree with one thing:

    > And recently I went back to AWS. WHAT?!?!? WHY? You might ask. To get some research done. Do a few tests, get in and out.

    I would never trust a person doing this, and would never hire him/her ever.

    • andrewstuart 8 minutes ago

      No it wasn’t you Simone. I don’t think you had joined AWS at this time. It was a nice guy I think named Mike. He has since passed away.

      I think you misunderstood the blog post. I was never employed by AWS, I was a customer.

  • bbbflgllglhlld 5 hours ago

    These fake open source rugpull companies deserve what Bezos did to them.

    They made hosting their software hard, intentionally.

    For example, prohibiting more than one node/replica, and being hostile to PRs/features that they consider their ”commercial offering”.

    But the worst thing I’ve seen, with a lot of this software, is probably the hostility towards people who want to automate it - for example, putting the software in a container (10 years ago) meant they refused to give support even if you had a valid paid contract.

  • joefourier a day ago

    > Cloud computing was an absolutely mind blowing revolution - suddenly your startup could run its own computer systems in minutes without need to install and run your own systems in a data center. This was an absolute game changer, and I really drank the AWS Kool Aid down to every last drop then I licked out the cup. I was all in on AWS in a big way.

    Am I the only one who remembers that VPSes and dedicated hosting services were a thing before AWS came around? Yes you had to pay for a month at a time and scaling wasn’t as instant, but it wasn’t like the only option before cloud computing was having to drive to the datacentre and install your own server.

    • tiffanyh a day ago

      > suddenly your startup could run its own computer systems in minutes without need to install and run your own systems in a data center.

      The “in minutes” is doing a lot of the work in that sentence above.

      I also used dedicated servers in the late ’90s (and they still offer great value today). But before AWS, provisioning new hardware typically took days, not minutes.

      AWS changed that, and the rest of the industry eventually followed.

      • reliablereason a day ago

        No, you could rent virtualised servers way before AWS. AWS simply had good marketing.

        Virtualised servers were not an AWS invention; what was new were their other services. For example, instead of renting a virtual server and installing a database on it, you could rent the database; that was sort of a new thing that AWS made into a thing.

        It was never cheaper; what you paid for was a promise of fire and forget. You would no longer need to worry about updating the server or the database, because the AWS crew took care of that.

      • joefourier a day ago

        > I also used dedicated servers in the late ’90s (and they still offer great value today). But before AWS, provisioning new hardware typically took days, not minutes.

        VPSes and non-custom configs for dedicated servers were pretty instant as far as I know; I think the advantage of AWS was more that you could scale up and down much more easily, since you weren’t locked into a monthly contract, and that you could automate server provisioning through an API.

    • _puk a day ago

      If you recall, AWS didn't scale instantly originally either.

      We had super bursty traffic, and had to go with Google Cloud (very early days! [0]) because with AWS you'd need to get in touch and pre-warm the ELB capacity for your expected bursts.

      We did a dead launch to 60 million customers (0 to 60 million, no organic growth phase) this way. I wouldn't want to do that on a VPS.

      [0] https://cloudplatform.googleblog.com/2013/11/?m=1

    • rglover a day ago

      Not first, but it was the first with a planet-scale marketing budget.

      I miss the Media Temple days.

    • flomo a day ago

      Am I the only one who remembers how shady a lot of those VPS/hosting companies were? It seemed to be a race to the bottom, so a 'good' outfit might suck or completely disappear a couple of years later. (Also, pricing was all over the map; I had a client who was paying $150/mo for a VPS.) Hetzner survived, but for a long time they had a reputation as a spam farm. So I get the initial appeal of AWS, used tactically. But for larger companies, it's something like IBM or Oracle: if you are price-sensitive, it's not for you.

  • wg0 19 hours ago

    I think one big decision AWS could have taken earlier is that of a declarative medium for cloud resources. CloudFormation is not human-friendly, whether as JSON or YAML. The problem with Terraform has been that it has to keep track of state separately - state which AWS already had in their databases (like what resources have been provisioned against a particular account number). Furthermore, I NEVER liked HCL; it never made sense to me.

    Otherwise, some things that are good about AWS are as follows:

    1. IAM is I think good, logical and granular enough.

    2. Separation of compute and storage in EC2 is very good.

    3. S3 is amazing.

    4. SQS is heavily underrated.

    5. RDS is expensive but very good. I do not know how to go about a 1 TB+ database with daily backups without RDS. A similar ZFS setup with filesystem snapshots is complicated.

    Not good things about AWS:

    1. Super expensive. About 10 times. With zero support.

    2. Current geopolitical environment would suggest getting off AWS if you are not a US company. The fascist idiots at the helm of affairs have lower IQ than the big void's average temperature in outer space.

    EDIT: Typo + Formatting

  • aetherspawn 9 hours ago

    The one thing that’s good about Microsoft is the support. You can get someone on the phone in less than an hour and they can actually fix the thing.

    That’s why it’s so far been hard to go past Outlook Plan 1 for (big scale) email hosting.

    Completely agree about AWS, and we use Cloudflare now, but the jury is kind of out on whether CF is largely going the same way.

    • trollbridge 9 hours ago

      Yes. I feel confident recommending Microsoft 365 to clients because I know if something goes wrong it can actually get fixed.

      I lack that confidence with Amazon or Google support.

  • morpheuskafka a day ago

    > I am reminded why I left AWS and how I need to finish the job, get off AWS Workmail, move my domains from Route53 and never return.

    Well, besides the fact that the author's account got suspended for no reason, WorkMail is being shut down in March 2027 anyway. I recommend checking out Purelymail for a budget, batteries-included option. Another option is to run your own server but have it use something like AWS SES to send externally, avoiding the IP reputation issue.

  • torginus 19 hours ago

    Imo lambdas are super cool, and the best way to have a no-headache fast-iteration time deployment service.

    What most people don't realize is that you don't have to go microservice or fragment your code into a billion little repos; you can take a standard webserver and move it to Lambda, as long as you don't expect requests to be able to share on-server state.
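    A minimal sketch of that idea: a thin adapter translating an API Gateway-style event (field names assumed from the common v1 proxy shape) into a call against an ordinary WSGI app. Real projects would reach for a maintained adapter like apig-wsgi or Mangum; this just shows there is no magic involved.

```python
import io
from urllib.parse import urlencode

def app(environ, start_response):
    # An ordinary WSGI app: nothing Lambda-specific, no shared per-request state.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from " + environ["PATH_INFO"].encode()]

def lambda_handler(event, context=None):
    # Translate the event into a WSGI environ, call the app, repackage the reply.
    captured = {}
    def start_response(status, headers):
        captured["status"] = int(status.split()[0])
        captured["headers"] = dict(headers)
    environ = {
        "REQUEST_METHOD": event.get("httpMethod", "GET"),
        "PATH_INFO": event.get("path", "/"),
        "QUERY_STRING": urlencode(event.get("queryStringParameters") or {}),
        "SERVER_NAME": "lambda",
        "SERVER_PORT": "443",
        "wsgi.url_scheme": "https",
        "wsgi.input": io.BytesIO((event.get("body") or "").encode()),
    }
    body = b"".join(app(environ, start_response))
    return {"statusCode": captured["status"],
            "headers": captured["headers"],
            "body": body.decode()}
```

    The whole "standard webserver on Lambda" trick is just this translation layer in front of unmodified application code.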

    • kikimora 19 hours ago

      I second that. I use Lambda as an on-demand server, with one lambda handling the entire web app.

    • viccis 16 hours ago

      I agree. Web service hosted on Lambda that, for long running async tasks, uses FIFO SQS (optionally by way of FIFO SNS) connected to the task runner Lambda. Easy. It's not hard to deploy like OP claims. Build a Docker image, toss it in ECR, and use AWS CDK to do infra. Done.

    • petesergeant 17 hours ago

      What is lambda adding in this situation that throwing a docker image up somewhere isn't?

      • torginus 13 hours ago

        Well, complexity and iteration time, for example. For Docker, you need something that runs it (like your own cluster of EC2 or ECS) and a private registry you push to, plus a separate user that provisions the actual server. Iteration (code changes, or something as simple as changing an env var) involves the cycle of uploading a new image, shutting down the old containers, and trying to start up the new ones, with all sorts of weird failure cases. For example, if your container depends on a Docker Hub image like alpine, you can run into a rate-limit scenario with Docker Hub, as AWS is too cheap to pay for Docker Hub access and doesn't have their own mirror, so your containers may fail to start unless you explicitly mirror your base image.

        Then you have to take care to update your images etc.

        All this stuff, like ECS, also lives in a subnet, so you have to manage routing and public accessibility and so on - it's a legit crazy amount of work compared to either Lambda or just running stuff on a virtual machine.

        Imo it's the worst of all worlds.

  • dzonga a day ago

    the AI (LLM) merchants will tell you that AI is now writing software ("agentic coding" they call it) - yet they can't even bill you properly, or they have a broken billing mechanism.

    their dashboards are trash & don't work - Google Cloud, AWS Console, Google Ads, Meta Ad manager

    I won't even mention the hyped up LLM vendors.

    but here we are - people being laid off due to AI, money being funneled into gigawatt datacenters

    • mcherm a day ago

      I don't think that's the real issue. The problems with billing and dashboards at cloud vendors are not new within the past few years, they have existed far longer than the LLM coding.

    • owebmaster a day ago

      The billing "problems" these companies have are working fine for them as they are there to increase revenue, not to improve user experience.

  • cmiles8 a day ago

    There was a time when AWS was truly innovative, but it’s long since transformed into Amazon’s cash cow and is behaving like such.

    Innovation has ground to a halt, with mostly just meh "hey, us too" launches. Pricing and design patterns feel increasingly focused on locking you in. AWS folks tell me that internally they talk a lot about making sure things are "sticky" with customers. The best engineering talent no longer wants to work there and it shows, especially in places like AI, where AWS has just released wave after wave of discombobulated nonsense.

    As a core “rent-a-server” concept with a few add on services there’s still a lot of utility, but AWS is gradually becoming a boring baseline utility with a ton of distracting half baked stuff jammed on top. Most companies I talk to are no longer focused on single cloud and increasingly are bringing a lot of workloads back on prem or in colos. Not everything, but for a lot of stuff that just makes more sense and is a heck of a lot cheaper.

    The chips business in Annapurna is probably the most interesting thing and that plays to its strength of the boring low level infrastructure stuff. Nearly everything AWS tries to do beyond chips and rent-a-server plays is a hot mess.

    AWS isn’t going away, but its future looks a lot less exciting and inspiring than the story that got us to this point.

  • psanford 21 hours ago

    > If you're using AWS Lambda then you have to work to keep convincing yourself this is better than your own web servers. Keep convincing yourself that using AWS Lambda is not a horrible mistake.

    lol ok. I have ~50 lambdas running in my personal aws account. Some of them are webservers running behind an api gateway or using a lambda function url to expose them to the internet. Some are running on a schedule, some are triggered from s3 events. The cost to run these for me is less than the cost of the cheapest vps (my total requests per month stay under the free tier limit). There is also zero maintenance I need to do for these functions (ok, this year I did have to find-replace al2 to al.2023 in my terraform config). I don't have to worry about making sure the os is patched for the latest vulnerabilities. And I don't have to worry about the specific hardware my code is running on at any time. Doing maintenance for old projects sucks. It is great to have servers I deployed years ago continue to chug along without me needing to think about it.

    Now, all of my lambdas are written in Go, and I suspect that if I was using one of the managed runtime libraries I would find the language upgrades quite annoying. Go also helps quite a lot with cold start times.

    Then again maybe I have just drank the koolaid. In my quest to use lambdas for as much as I can as cheaply as I can, I made a library[0] to use sqlite on top of s3 (not just readonly). It uses the sqlite session extension plus s3 compare-and-swap to allow you to write updates safely to s3, even if you have concurrent writers.

    [0]: https://github.com/psanford/s3db
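    The compare-and-swap trick at the heart of a library like that can be shown with a toy store standing in for S3. The real thing uses S3 conditional requests plus the sqlite session extension; `FakeS3` here is purely illustrative of the optimistic-concurrency loop:

```python
class FakeS3:
    """Stand-in for an object store with conditional writes (If-Match on an
    ETag-like version counter). Not the real s3db API - just the concept."""

    def __init__(self):
        self.data, self.etag = b"", 0

    def get(self):
        return self.data, self.etag

    def put_if_match(self, data, expected_etag):
        if expected_etag != self.etag:
            return False  # someone else wrote first; the caller must retry
        self.data, self.etag = data, self.etag + 1
        return True

def append_safely(store, line):
    # Optimistic concurrency: read, modify, conditional write, retry on conflict.
    while True:
        data, etag = store.get()
        if store.put_if_match(data + line, etag):
            return
```

    Concurrent writers can interleave freely; a losing writer simply re-reads and retries, so no update is ever silently overwritten.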

    • kees99 21 hours ago

      > The cost to run these for me is less than the cost of the cheapest vps (my total requests per month stay under the free tier limit).

      I don't think this is a valid argument. Free-tier VPSes exist as well.

      On the other hand, if you don't trust unattended-upgrades [0] and prefer to spend time poking the package manager manually (while also counting that time as an expense) - sure, that's a strong argument in favour of using Lambda.

      [0] https://ubuntu.com/server/docs/how-to/software/automatic-upd...

      • psanford 21 hours ago

        I do not trust a free-tier VPS with my data.

        • nicce 18 hours ago

          How is trust model different with lambdas?

    • Neikius 21 hours ago

      As you yourself said, your load is so light you keep it in the free tier. Their entire business model is to capture you while your load is light, and then when you scale, the price goes up.

      • psanford 21 hours ago

        I have also used lambda at scale in professional environments. I would not use a lambda for a webserver at scale, but having an s3 object trigger processing via a lambda function is a really nice flow.

      • sofixa 21 hours ago

        But the price per unit of measurement goes down, so no, it's not "their entire business model".

    • noprocrasted 19 hours ago

      You can also put these lambdas on a shared hosting provider as CGIs and get the exact same experience.

  • andai a day ago

    At last my quest to find the stooge has come to a bitter end!

    I saw some 192 core instances on Vultr, but I haven't tried them yet. What are you doing with all them cores?

    I often fantasized about spinning up hundreds of nodes for various projects that needed number crunching. Then realized "wait I can just rent one big box for an hour" haha. It's really cool that we can do that now.

    • andrewstuart a day ago

      >> 192 cores What are you doing with all them cores?

      The ancient forgotten art of Vertical Scaling.

      • rglover a day ago

        It's remarkably zen and effective.

  • mchl-mumo 14 hours ago

    I looked up DigitalOcean and it looks like a good alternative. What downsides should I be aware of, from those who have made the switch from AWS?

  • VerifiedReports 16 hours ago

    I left because of their shitty or nonexistent documentation AND absurd complexity.

    After wrestling with their garbage for weeks, we started over and built a VPS from scratch. Development and deployment proceeded without a hitch after that. The only vestige remaining was S3.

    I'm in the midst of a new project now, and I'm not even considering Amazon, even for S3 this time. I'm going to use an S3-compatible layer just in case, but I don't want to give Amazon a dime anymore.

  • Canada 18 hours ago

    And because we have fallen for the convenience, we have lost a lot of alternatives.

    The old rack/cage way was less convenient, but it came with a respect for our anatomy to run our stuff that I really miss.

    • jerhewet 14 hours ago

      > anatomy

      Autonomy?

      [autocorrect strikes again! :-)]

  • recursive-call 15 hours ago

    I was on a team that used AWS once for their quantum computers. We had $100 of API credits. While still trying to get the code to work, we somehow used all of them, and then it didn’t even alert us that we were out of credits; it just spent an additional $100 that we didn’t have. I would not touch this system again with a 10-foot pole…

  • nuker 4 hours ago

    > Complexity! Complexity!!

    This guy needs Supabase or Heroku or similar.

  • faangguyindia a day ago

    Why do people even bother with cloud?

    I’ve a couple of apps doing a few million a day. I am using Hetzner and before that used DigitalOcean. Mind you, for close to a decade.

    People are unnecessarily complicating stuff, and these clouds can go very expensive very quickly.

    Recently, I came across a company that was spending $20k a month on GCP. I am like, are you kidding me, $20K for the kind of stuff you do??? It seems you do not understand how CPU, RAM, and disk work if you plaster on such "autoscaling hyper solutions", burning money in the cloud.

    I moved their stuff out of the GCP managed solution and ended up with a $200-400 per month bill. The CEO can still not believe how it's even possible.

    I suggested they move to dedicated servers, but they didn't want it, they said they must show they are on Hyperscaling cloud.

    OK fine, we'll stay in Hyperscaler but not use any of their service other than VMs.

    They racked up a ton of bills by using cloud monitoring, Datastore, and autoscalers (with no proper tuning), Kubernetes.

    I replaced all of it with Prometheus, Grafana, Loki, and most stuff from Datastore to Postgres and Mongo with replicas. I added Redis.

    I implemented a custom scaler where you can scale off of app metrics, not just off an arbitrary CPU peg.

    I implemented hot data reload by packing the data updates into a gzip file, uploading it to GCS, and pulling it from the autoscaled units. I moved the stuff to Spot VMs.

    The complexity of stuff in cloud is high for nothing.
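    The custom app-metrics scaler mentioned above comes down to a small proportional formula - the same shape Kubernetes' HPA uses, except the input can be any application-level metric (queue depth per worker, p95 latency) rather than CPU. A sketch with arbitrary bounds:

```python
import math

def desired_replicas(current, metric_value, target_value, min_r=1, max_r=20):
    # Scale the current replica count by how far the observed app metric
    # is from its target, then clamp to sane bounds.
    want = math.ceil(current * metric_value / target_value)
    return max(min_r, min(max_r, want))
```

    For example, 4 workers each seeing 200 queued jobs against a 100-job target would scale to 8; a metric at half the target would scale down to 2.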

    • maccard a day ago

      At my previous startup: because AWS gave us a bunch of credits and helped us design the infra. It meant we ran for free what they designed for free.

      At a previous bigger company, getting procurement to sign up with a new provider required writing a business case, justifying the spend, and then getting multiple competing quotes and speaking to their sales teams. Signing up for a new service takes _months_, even for $10/mo, as they’ll negotiate for bulk discounts and the best possible terms for something that will literally cost less per year than one of the meetings they hold to discuss the “value”. Meanwhile, on AWS I can click a button in the marketplace and it gets thrown onto the AWS account, which is pre-approved spending.

      • hibikir a day ago

        Many a big company migrated because they had those very same slow procurement problems with internal data centers. I saw multiple cloud migrations because internal friction was at a level where the price didn't matter: 6 months for the smallest VM, that kind of thing. Very adversarial relationships, often with very poor incentives, as the service setup costs charged to other business units were way inflated, but then the maintenance costs didn't pay enough. Paying 3x-4x more a year for just a semblance of reliability was seen as a big plus.

      • misir a day ago

        At my current team at a “bigcorp” I have noticed a similar pattern. We use AWS not because it’s efficient in any way.

        We use it because we don’t want to deal with slow procurement process. It kills all the momentum.

        • maccard 19 hours ago

          Exactly. I want to set up elastic search - I can either have procurement go through their sales, or be up and running via the marketplace in less time than it would take me to fill in the RFQ form to send to procurement.

      • xmcp123 a day ago

        Have seen this repeatedly also.

        Watched one company end up with a $250k AWS bill when their credits expired (which they could not pay).

        • maccard a day ago

          If you let it go that far then you were going to blow it one way or another - it's not an excuse to totally ignore the cloud spend, but it is an excuse to defer it to a later date. If you're successful, fix it; if you're not, then AWS isn't getting paid anyway!

          • xmcp123 21 hours ago

            Yeah, they had an impossible-to-spend number of credits (YC) until they expired, so every problem became an AWS solution.

            As an example, they needed a lot of proxy servers. Instead of just using a proxy service, there was a fleet of ec2 instances.

            • maccard 14 hours ago

              If all you have is AWS credits suddenly every problem looks like EC2.

    • edg5000 a day ago

      I think AWS is liked because when it started, being able to get a new VPS up in minutes was still quite unusual. Many hosts would require about 24 hours, I suspect, to get a new VM up. At least that was my experience. But nowadays there are probably many options for getting a VM instantly.

      I agree that it's overcomplicated. Having the self-service portal for assigning IPs and such is useful, and being able to detach storage from VMs is also quite flexible. But most of it seems overkill.

      • maccard a day ago

        It’s flexible but slow. We ran our C++ CI/CD on AWS at a previous company, and we used spot instances with volumes attached and detached dynamically. The performance was absolutely abysmal, because in effect you’re running compilation across a networked file system, no matter what AWS says your throughput is.

        Our 64-core spot instances on Windows were taking 8-10x longer than our developer machines with the same core count, and a bunch of engineering went into the scaling, queue management, etc. If we’d just had a single bare-metal machine from Hetzner we could have saved money _and_ reduced our iteration times.

    • goosejuice a day ago

      > spending $20k a month on GCP

      > burning money in cloud

      I suspect there's two reasons why this happens.

      One is just the dissociation from opex that seems ever-present in the VC model. The other is that many startups settle on an ops solution before hiring ops, and the cost of switching isn't that attractive until they're faced with a dwindling runway and a down round.

      • esseph 21 hours ago

        > many startups settle in on a ops solution before hiring ops

        Sounds expensive

        • nijave 20 hours ago

          Not really. It's cheaper than hiring an IT admin and sysadmin for a while.

          Those tend to be tricky hires on the small end, since you tend to want a jack-of-all-trades who either demands a premium salary or doesn't exist.

          When you have 10 software engineers, having 1 dedicate 10-20% of their time is cheaper than hiring 1-2 FTEs that aren't writing code.

          • noprocrasted 19 hours ago

            Unless we're talking actual PaaS (Heroku, Render, Railway, etc.), the cloud also needs a dedicated skillset, so "cloud" doesn't remove the need for a sysadmin.

            If you can get (and trust they do it right) developers to do AWS or Kubernetes, you should be able to trust them to do conventional Linux sysadmin on a bunch of dedicated boxes.

            • goosejuice 17 hours ago

              I suspect you're either severely underestimating what the cloud offers or thinking of a very narrow set of software businesses.

              A full-stack/backend dev is more than capable of learning both, but one of those has way more footguns than the other.

    • noprocrasted 19 hours ago

      > they said they must show they are on Hyperscaling cloud.

      This is the main reason; and it applies to developers (they need cloud buzzwords on their resume), it applies to managers (who in turn hire only those with said buzzwords) and it applies to company execs/CTOs who can brag about the complex (self-inflicted) problems their company is solving at the next cloud provider conference, so they can justify yet another VC round.

      Run this for over a decade, and you'll end up in a situation where an entire generation of "engineers" is no longer capable of configuring a Linux box to serve some basic webapp and will make up whatever reasons to avoid even attempting to do so.

    • nijave 20 hours ago

      It's fairly easy to setup services without worrying about pages.

      I can stand something up on AWS in a couple of hours and be fairly confident it will run reliably (assuming their service offering is actually decent - some suck).

      We test backups and they never fail. Metrics and logs always work.

      >People are unnecessarily complicating stuff, and these clouds can go very expensive very quickly.

      I don't think that's the cloud vendors' fault. They make it easy to stand up new services, so people get overly enthusiastic and create convoluted architectures. Have Postgres but need full-text search? OpenSearch is just a few clicks (well, hopefully IaC config..) away, let's use that! When you're building it yourself and need to set up the stack, instrument, monitor, and configure backups, the cost is high enough that you say "hey, maybe pg fts is fine for now".

    • okdood64 9 hours ago

      > I replaced all of it with Prometheus, Grafana, Loki, and most stuff from Datastore to Postgres and Mongo with replicas. I added Redis.

      But now you need staffing/headcount to be experts in this stuff, and to maintain, upgrade, and be on call for it?

    • puelocesar 17 hours ago

      We are having this dilemma right now. I'm not involved in devops, but it's quite annoying how slow everything related to our backend is.

      I think we spend around $5k on AWS, and I'm pretty sure we could be much more performant for a fraction of that price.

      The problem is, who is going to set everything up? Hiring someone would for sure cost more than $5k.

      • noprocrasted 9 hours ago

        If you can get that 5k/month down to let's say 1k, that's a saving of 48k over the course of a year. You can get a consultant/freelancer for half that sum that'll happily do it for you.

    • andrewstuart a day ago

      I worked for a startup company - the founders were really nice people and had put their own money in - quite a lot of money - to get the software built for the vision they had.

      By the time I joined, 18 months after development had started, a giant, complex, hideously tentacled software beast had been built that used every possible AWS service that the massive offshore team of developers could find to use.

      It should have been built on a single Linux box by a single senior developer with Python and Postgres or nodejs or Ruby or whatever.

      They went out of business after not too long and I couldn't help wondering if things might have been different if they hadn't spent a fortune building a giant money making machine for AWS, instead of making a web application on a Linux box.

      Every AWS project I have worked on has had some significant work put into programming AWS instead of writing business functionality.

      • cube00 a day ago

        > hideously tentacled software beast had been built that used every possible AWS service that the massive offshore team of developers could find to use

        To be fair, if they had an AWS Solution Architect involved, they'll heavily push you down this road, and if they manage to get in management's ear they'll push the idea that serverless AWS features are vastly cheaper.

        If you're only responding to a handful of requests that's true, but once things ramp up you get "nickel and dimed" for everything: API Gateway requests, lambda execution time, DynamoDB read/write units, CloudWatch logs, outgoing data, step function transitions, S3 requests.

        I understand all those services cost money and they shouldn't be free, but I question whether paying all those micro-transactions is worse than paying for your own VMs, especially once your customers complain about the cold starts and you think you can fix it with "lambda warming".
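
        Back-of-envelope math shows where the crossover sits. All prices below are illustrative assumptions, not current list prices:

```python
# Rough crossover sketch. Prices are ASSUMED for illustration -- check the
# current pricing pages before making any real decision.
LAMBDA_PER_M_REQUESTS = 0.20          # $/1M invocations (assumed)
LAMBDA_GB_SECOND      = 0.0000166667  # $/GB-second of compute (assumed)
APIGW_PER_M_REQUESTS  = 1.00          # $/1M HTTP API requests (assumed)

def serverless_monthly(requests_m, avg_ms=100, mem_gb=0.5):
    """Monthly $ for Lambda + API Gateway at requests_m million requests."""
    compute = requests_m * 1e6 * (avg_ms / 1000) * mem_gb * LAMBDA_GB_SECOND
    return requests_m * (LAMBDA_PER_M_REQUESTS + APIGW_PER_M_REQUESTS) + compute

# A small always-on VM at an assumed ~$30/month is flat regardless of volume.
for m in (1, 10, 100):
    print(f"{m:>4}M req/mo: serverless ~ ${serverless_monthly(m):,.2f} vs VM $30.00")
```

        At a handful of requests serverless wins easily; somewhere in the tens of millions of requests a month the per-request metering overtakes the flat VM, before you even count DynamoDB units, CloudWatch, and egress.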

        • maccard a day ago

          To be fair that’s an AWS problem not a lambda problem. If you replace lambda with EC2 the only thing you save in is lambda and step functions(and maybe api gateway but now you need to pay for a load balancer or a public IP), the rest you need to pay for anyway.

    • kriz9 a day ago

      The ease of getting things set up quickly and usually for free when starting up is very tempting. Later, migration is usually considered risky and not worth it because of maintenance overhead - which I would argue has become very easy.

    • stickfigure 21 hours ago

      Grafana (and especially Loki) is hot garbage compared to what you get out of the box in GCP. I'm in a Grafana organization today and the sheer amount of developer and devops time it wastes is mind boggling.

      You moved something from a single datastore to three different database technologies? I don't know your domain, but that sure doesn't sound like a complexity reduction.

      • faangguyindia 21 hours ago

        >You moved something from a single datastore to three different database technologies? I don't know your domain, but that sure doesn't sound like a complexity reduction.

        What's bad about Grafana? It's simply used for some alerts and monitoring; I've used it for a really long time and it has never failed me, not even once.

        It's much simpler to query Postgres or Mongo compared to duplicating data dozens of times in Datastore.

        • stickfigure 5 hours ago

          The UX is dreadful? I could spend hours picking apart the terrible design decisions. After many years with the comparative luxury of the stackdriver tools on GCP, I moved to a BigCorp with a grafana/loki/mimir system and a whole devops army to maintain it. For two years I've been unable to find anything positive to say about this experience. Our devops folks are super smart, so I can only conclude that the software sucks.

          I can't really judge your database choices; I don't know your specific problems, we're just trading quips on the internet for fun. But man oh man grafana is disappointing.

    • MagicMoonlight a day ago

      This isn’t a like for like comparison though, is it.

      You removed all of their logging and all of their redundancy and reliability and replaced it with shitters that will all explode if the small provider's one data centre goes down.

      And if someone penetrates this mega server, they’ll be able to wipe all your logs or tamper with them, to hide the attack.

      If your storage servers go down, everything they have is gone. And these providers don’t offer the finest hardware. How do you know all of those drives aren’t from the same batch? They will be, because they’re a bulk buyer with a single data centre.

      • faangguyindia 21 hours ago

        >You removed all of their logging and all of their redundancy and reliability and replaced it with shitters that will all explode if the small provider's one data centre goes down.

        They'll never need it; a misconfiguration on those services ends up costing several grand.

        >If your storage servers go down, everything they have is gone

        It’s just logs for an app server, not some banking critical info that will cause a panic if lost. Most of what they are using for logging is for finding some errors, not for mission-critical things which must not be lost.

      • esseph 21 hours ago

        > How do you know all of those drives aren’t from the same batch?

        Because it's explicitly something you can request when doing your server order from your vendor. In this particular case several years ago, Nutanix did good.

    • atemerev 20 hours ago

      Credits. It wouldn't make sense without free credits. And when you are hooked, good luck in moving out.

  • geoffbp a day ago

    Slightly different but related topic - for people who work with people vibe coding, what is the easiest way to enable that for non-tech users (while reducing risk)? AWS, or something like Vercel? Coolify?

    • sudosteph a day ago

      I'm old and bitter about this, but you're not reducing risk by going with PaaS, you're just outsourcing it. That recent "My AI Agent deleted my prod DB" story was only possible because the PaaS they were using allowed for 1-click permanent delete. At least AWS has a "prevent accidental termination" checkbox.

      Nobody wants to hear this, but as things stand, there's no escaping risk for vibe coders right now. Personally, I think AWS is still a good choice for the long run, but don't make the mistake of thinking current LLMs will actually be able to manage the environment on par with a decent infra engineer. That's one of their weaker areas right now. Good news is there are a million managed service providers and AWS-competent humans still in existence. Also Premium Support is a good resource.

      Whatever you do, make a lot of backups and store them on a different service somewhere. Then if you get to a situation where you need to do something with sensitive data, or need to raise money, engage with someone who can do a proper review.

    • _puk a day ago

      Vercel and supabase seems to be the norm around here.

      DX is simple, integrations between the two, and the stack is well understood by the LLM.

      Lovable uses supabase, and is surprisingly easy to eject from too; I've done the lovable to Vercel + supabase a couple of times, even managing to keep it syncing via the Git integration. You can get proper scalable infra and minimal vendor lock in whilst the vibe coder gets to play with the pretty.

  • rglover a day ago

    You can accomplish a lot by just having a basic knowledge of Linux sysadmin. I was clueless and then learned some systemd-and-curl-fu. Will never forget the "holy sh*t, this is deceptively simple" moment. A bit more research and I found that beyond convenience and specialty APIs, you really just don't need a lot of this stuff to run a healthy system (since reducing absolute cloud dependence, my reliability has gone through the roof).

    • sandruso a day ago

      100%. I'm not really sure why we all agreed that deployment is somehow the hardest thing and needs to be outsourced, when setting up a Linux server is one of the richest experiences you can get and it will pay dividends forever.

      • noprocrasted 19 hours ago

        There's a multibillion dollar industry that lives only because they managed to successfully convince an entire generation of "engineers" to become helpless and not be able to serve an HTTP response using their own hardware even if their life depended on it.

        • BirAdam 18 hours ago

          It’s easy to convince developers of a thing if it starts with: “you don’t need to learn X anymore”

      • pdimitar 20 hours ago

        Stop with this "we" cliche already, please. I never agreed to it and I'm in the profession for 24 years. You don't speak for me.

        Executives will always prefer to transfer liability and responsibility to someplace else.

        Who's calling the shots in an organization? Engineers or executives?

    • gtowey 17 hours ago

      Just wait until you learn about system tools like perf, gdb, bpf -- the amount of low-level detailed information you can get about running processes means you'll reduce the amount of guesswork involved with troubleshooting or performance optimization to a minimum.

    • da02 21 hours ago

      How do you deal with the few minutes of downtime when you do kernel/OS/software upgrades?

      • juahan 21 hours ago

        I’m pretty sure for most systems that does not matter in the slightest.

      • rglover 20 hours ago

        Depending on the deployment and any SLAs, I either don't worry about it (just do a late night rollout when nobody is on the system) or rely on my deployment architecture's sibling checks (I can see when a given machine is still versioning and requeue subsequent rollouts to other machines).

      • tekla 21 hours ago

        How is this an issue in a world where load balancers exist? I was part of a Unicorn that ran prod on 8 boxes and literally never had customer facing outages due to infrastructure updates.

      • BirAdam 18 hours ago

        You put nginx or Haproxy in front of the hosts, drop the one that needs maint from the pool, and re-add once it’s ready.
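
        With HAProxy, the drop/re-add step can even be scripted over its runtime API. A rough sketch, assuming the admin socket is enabled and using made-up backend/server names:

```python
import socket

# Hypothetical socket path and backend/server names -- adjust to your config.
HAPROXY_SOCK = "/var/run/haproxy.sock"

def runtime_cmd(backend, server, state):
    """Build an HAProxy runtime-API command to change a server's state."""
    assert state in ("ready", "drain", "maint")
    return f"set server {backend}/{server} state {state}\n"

def send(cmd, sock_path=HAPROXY_SOCK):
    # Sends one command over HAProxy's stats/admin socket. Requires
    # `stats socket ... level admin` in haproxy.cfg.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(cmd.encode())
        return s.recv(4096).decode()

# Typical flow: drain, wait for in-flight connections, patch/reboot, re-enable.
drain  = runtime_cmd("be", "web1", "drain")
enable = runtime_cmd("be", "web1", "ready")
```

        "drain" stops new connections while letting existing ones finish, which is the zero-downtime piece of the rolling upgrade described above.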

      • andoando 18 hours ago

        You spin up a second host and load balance

  • nicman23 6 hours ago

    I mostly used the EC2 instances, and the workflow to make a simple GPU instance from the terminal (Terraform) is terrible in contrast to e.g. GCP.

    From the UI it is even worse.

    Also, it is so slowwwwwww.

  • hand2note 18 hours ago

    Almost everything is true about Azure as well, especially obscure pricing and complexity in absolutely everything.

  • dakiol 19 hours ago

    When are we gonna start hearing the same stories about Anthropic/Openai/etc? The whole AI thing kinda smells like the early days of AWS: everyone was getting onboarded, but later realized they'd built up a pretty big dependency that's not easy to shake

  • mt_ 20 hours ago

    The Well-Architected Framework tells you to have separate accounts; your fault that you "tested" in a production environment. https://imgur.com/a/Smal9fL

  • eluded7 a day ago

    I'd tend to agree with the author. If forced to choose a cloud platform though (and that often is the case) then AWS is probably the best of the bunch in terms of reliability. Have heard and experienced some real horror stories with Azure & GCP by comparison.

    • tietjens 4 hours ago

      I would read a GCP horror story for fun. Anything you can direct me to?

    • te_chris 21 hours ago

      GCP is miles better. Their IAM is at least understandable, for a start.

  • hhh 20 hours ago

    IAM is my favorite part of AWS.

  • sbinnee a day ago

    I also tried. Only service I use is s3 for personal backup. I pay around 15 cents per month.

  • alde a day ago

    The set of core services on AWS remains amazing: EC2, S3, IAM, EKS, Route53, RDS etc.

    AWS IAM is extremely well designed when you compare it with the spaghetti monster IAM systems of other clouds.

    Every time I try the new cool thing supposed to replace these services on some other provider - I understand how mature and polished the AWS ones are.

    With that said, the remaining 90% of AWS services, like WorkMail, Cognito, API Gateway, are absolute hot garbage which no well-meaning AWS expert will touch with a 10 meter stick.

    • nijave 20 hours ago

      >AWS IAM is extremely well designed

      Agree, so is STS and SDKs generally just work. I don't miss on-prem companies with legacy Auth where you maintained 100 service accounts for everything with very careful password vaulting and credentials management policies. So much easier to use IAM policies.

      >are absolute hot garbage

      I kind of like Cognito but both Cognito and especially API Gateway are somewhat convoluted to configure. They seem to work fine once you have them set up right, tho.

      Talking about hot garbage... Not a fan of Redshift and Lake Formation at all. We switched to Snowflake, saved money, got better performance, and had a simpler setup. Really there was nothing about Redshift that was better. We're billed through Marketplace so there's not even a consolidated billing upside.

      Imo Redshift is a relic of the past and has failed to modernize.

  • cantalopes 3 hours ago

    Sorry to hijack this, but since we're talking about AWS's exorbitant egress prices: could someone recommend a reliable, production-tested S3-compatible service that will not ruin me financially? I have been with Hetzner, and while their VPS servers are great, they are absolutely terrible at S3: random downtimes, incomplete uploads/downloads, capacity issues, terrible API key management, etc. The failure rate is really high.

  • znpy a day ago

    > Of course I do not pay for premium support, so I have to wait the 24 hours that they said it would take them to reply. It's 3 days and AWS support has not replied.

    The writing has been on the wall for a few years now, and this is particularly evident to those that have worked at AWS: Amazon is in its day-2 era.

    Amazon being in its day-2 era means that most of what has been written in the past twenty years about Amazon is not valid anymore.

    “Customer obsession” is literally their first leadership principle, and stellar support was their defining characteristic.

    • sudosteph 21 hours ago

      I worked for AWS Premium Support over a decade ago. Waiting 3 days or more for a non-premium support customer would not have been unusual back then either. But at least the quality of response was typically pretty good; I haven't seen what it's like lately.

      They've always struggled to hire for those roles. The people who are best at Engineering Support also tend to be the people who move on to other roles after a year or two.

    • nijave 20 hours ago

      Imo AWS support is pretty decent compared to other vendors. Azure was by far the worst I've ever dealt with--engineers becoming adversarial and passing blame in tickets without actually resolving anything.

      I've had some run-ins with poor AWS support and have even gotten bill credits/refunds when they offered incorrect advice. It really depends on the service but in general they're fairly good.

      • znpy 20 hours ago

        > Imo AWS support is pretty decent compared to other vendors.

        yep, that's the point.

        it used to be stellar on its own, now it's only good when you compare it to other vendors.

  • stuaxo 3 hours ago

    The whole model of slicing the app into so many pieces (proprietary wrappers it seems like) to make more opportunities for billing is terrible.

  • cynicalsecurity a day ago

    Preach, brother.

  • maptime 21 hours ago

    How Lambda got as bad as it is, I have no clue. Not a lover of Azure, but Azure Functions is such a nicer experience.

  • dangoodmanUT a day ago

    GCP would be perfect if they didn't have a history of randomly dropping quotas on startups, causing them downtime

    • squirrellous a day ago

      What do you find appealing about GCP? I occasionally hear positive sentiment like this but don’t entirely understand the reason, mostly because I haven’t used non-GCP clouds professionally. Is it just the least bad of all the big clouds?

  • stiray 7 hours ago

    Amazon is bad at their blocking of accounts. They blocked mine for no reason (they want me to call some USA phone number, which I won't) a few years back, and I was writing down everything that I would have bought at their store but bought elsewhere instead.

    They have lost 3,785.90 euros in sales due to their idiotic anti-user war.

    Not to mention all the bad reputation I gave them.

  • mlhpdx 20 hours ago

    I’m not sure how someone can be an “AWS Fanboy”, drink in all the promise, and think IAM is evil. As far as I can tell it is the one glorious thing that separates AWS from others. IAM is the core that makes it sane.

    • Cyph0n 20 hours ago

      I may be biased, but I find that GCP’s IAM story is simply way better.

      • mlhpdx 18 hours ago

        I find it really difficult to discuss the two with people because of the overloaded terminology.

  • stevepotter 20 hours ago

    I was such a fan of it that I ended up working there for 4 years. Now I avoid it and encourage others to do the same.

    AWS used to have a nifty tool called "policy analyzer" or something that monitored for permissions used by a role so you could scope it down. The other day I had the need for it and when I went to use it, found out they charge something like $9/resource. So I would pay $45/month for metadata monitoring on just 5 things? Nuts. If they knew how to build truly delightful products, they would make something like a role that starts with broad permissions and automatically scopes itself down after some point. And it would be free or at least really cheap.

    DDB is hardly a database. The only reason I can think of to use it is for massive amounts of data whose schema and query patterns are guaranteed to almost never change, which is very rare. Need to sort data on a field? Then you have to create a 'secondary index', which is a copy of the table that they charge you for and that is not strongly consistent. Schema change? Good luck with that. And don't you dare ask to use a nice ORM library. But hey it's serverless.
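
    The "secondary index" gripe above, in boto3 terms: adding a sortable field really is a whole extra table copy. A sketch (table/index names are hypothetical, on-demand billing assumed):

```python
# Sketch: the UpdateTable payload for adding a DynamoDB GSI. The index is a
# full, separately billed copy of the table, queryable only with eventual
# consistency. Names here are made up.
def gsi_update(index_name, hash_key):
    return {
        "AttributeDefinitions": [
            {"AttributeName": hash_key, "AttributeType": "S"},
        ],
        "GlobalSecondaryIndexUpdates": [{
            "Create": {
                "IndexName": index_name,
                "KeySchema": [{"AttributeName": hash_key, "KeyType": "HASH"}],
                "Projection": {"ProjectionType": "ALL"},  # full copy => full cost
            }
        }],
    }

# Live call (not executed here):
# import boto3
# boto3.client("dynamodb").update_table(
#     TableName="orders", **gsi_update("by_status", "status"))
# Note: Query(IndexName=...) cannot set ConsistentRead=True on a GSI.
```

    Contrast with `CREATE INDEX` on a relational database, which is one statement and doesn't duplicate your storage bill.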

    Here's a good one: you stop an EC2 instance and its volume sticks around, and you pay for that. If you detach the volume, you still pay. There is no way to 'archive' an instance. And the only way I found out about that was I got hit with a big bill for those volumes, with the charge labeled 'EC2 - Other' lol. Not very 'customer obsessed' to me.
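
    Hunting down those 'EC2 - Other' charges can at least be scripted. A sketch that flags volumes still attached to stopped instances; the input is shaped like boto3's `describe_instances()` response, and the live call is left commented:

```python
# Find EBS volumes that keep billing on stopped instances.
def stopped_instance_volumes(reservations):
    vols = []
    for r in reservations:
        for inst in r["Instances"]:
            if inst["State"]["Name"] == "stopped":
                for bdm in inst.get("BlockDeviceMappings", []):
                    vols.append((inst["InstanceId"], bdm["Ebs"]["VolumeId"]))
    return vols

# Live usage (not executed here):
# import boto3
# resp = boto3.client("ec2").describe_instances(
#     Filters=[{"Name": "instance-state-name", "Values": ["stopped"]}])
# for iid, vid in stopped_instance_volumes(resp["Reservations"]):
#     print(iid, vid)  # snapshot then delete the volume to stop the charge
```

    The closest thing to "archiving" an instance is snapshotting its volumes (billed at the cheaper snapshot rate) and deleting them, then restoring later.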

    My gripes are clearly not important to them because this is old stuff. So all I can do is go somewhere else, which is fine with me

    • goostavos 20 hours ago

      DDB has two use cases:

      1. You need an "infinitely scalable" key/value store and have deep pockets[0]

      2. you work at AWS and your deployment pipeline has so many stages and regions and fabrics that you can no longer even conceptualize what it means for there to be a "current version" of your software (the hell in which I live).

      But for some awful reason it's sold as a general purpose "NoSQL Database." Pair that with the Pavlovian response developers have to the word "scale" and you've got an army of people using the worst possible tech for their usecase. Everyone eventually pairs DDB with Elastic whenever "Oh, wait, so we need to be able to query our data?" hits.

      [0] And you ONLY need PK reads. Querying turns "infinite scale" into "infinite throttles."

      • stevepotter 11 hours ago

        Agree. I didn’t like being forced to use it. There was some edict based on some different past problems. My service was a devops thing and didn’t really have a data plane. A regular db would have been perfect but would have required some silly high level approval we weren’t willing to get. All that despite being told service teams are free to build how they want

  • high_byte 20 hours ago

    similar thing happened to me. I'm not a heavy aws user but wanted to setup some s3 buckets few days ago but my account was suspended for the same reason

    but unlike OP I just accepted this fate and moved away from aws :)

  • aeagentic 20 hours ago

    But is there a better one with the same IaC and API completeness?

  • _wire_ 2 days ago

    I love you baby, I need you! I'd never cheat on you! Come back!

    Hey good lookin'

    • renticulous a day ago

      Looks like a blogpost written to get attention and resolve his personal problem.

  • bironran 18 hours ago

    GCP has its own share of issues.

    ...

    I was writing a long vent about GCP, but the mix of issues we had, just in the last few weeks, was too identifying and I don't want to sour an already tenuous relationship, as much as I'd like to spill it all here.

    Let's just sum it up with resource crunch and degraded services because, apparently, when one customer signs a $200B deal [1] all the "just a few $10M" get thrown to the wayside.

    AWS is also affected. Time to go to Azure? I never thought I'd say those words.

    [1] https://www.engadget.com/2165585/anthropic-reportedly-agrees...

    • vp4nkov 17 hours ago

      Azure was affected by this already - small compute quota increase tickets I had opened last year took months to resolve, and they also took away the ability to provision Postgres in eastus entirely (we are under the quota limit).

  • pbgcp2026 8 hours ago

    ... skills issue. LOL. Good luck making these tests easier on GCP / Azure. Or try to set up an equivalent HW server without pulling your hair out. Complexity is a large part of why we, IT people, are paid that money. (Should've created a *new AWS account* for those tests. AWS's automated security algorithms almost certainly flagged this abnormal behavior as a "suspected security breach" to protect the (long-dormant) user from potentially devastating unauthorized charges.)

  • tonymet 18 hours ago

    Every single complaint has a simple fix: “just use EC2”

    • BirAdam 18 hours ago

      That’s what all of AWS’s infra is anyway. Each instance of a thing is just a premade EC2 instance (I am exaggerating, but not by too much).

      • tonymet 12 hours ago

        EC2 + open source app = 5x vCPU and IOPS billing

  • raverbashing 19 hours ago

    > IAM - the hideously complex auth and access rules system - this was invented by Lucifer sitting on his burning throne in the ninth level of Hell as the worst possible torment for those who have been sent below for using AWS.

    Perfect explanation - no notes

    I don't think I remember anything so over-engineered and confusing in recent times (probably SELinux now that I think of it).

    And I understand - we kinda need the complexity for what they intend to do but they do need a Come To Jesus moment here to make the Insane Asylum Machine make a bit more sense for mortals

    2nd most annoying thing? The boto3 lib, where conventions don't matter, Pythonic is just a suggestion, and the thing works more like a REST wrapper than anything else over a not-great API (please tell me why there's an S3 API and an S3Obj API)

  • atemerev 20 hours ago

    As if there's any alternative. Azure? That mess of everything smashed on top of each other that looks like it was vibecoded in a few months by hundreds of people at once, except that it looked like this from the very beginning, when there was no AI? The one that makes you fill in docx forms to enable quotas for some services? Or Google Cloud, which _looks_ like it might be simpler, but has permissions for permissions to enable permissions, and endless micromanagement? I am trying things, but I always return to AWS :(

  • calmbonsai 16 hours ago

    The AWS UI should be, effectively, read-only for any infrastructure aside from setting up some initial roles and perms to manage all of it through an IaC system.

    Put more bluntly, if you're using the AWS Console to spin-up/spin-down service instances you're doing it wrong.

    • alasano 14 hours ago

      I've never enjoyed AWS more than with LLMs managing infra as code through sst.dev

  • fafa09 18 hours ago

    Interesting take on the migration.

  • xrd 20 hours ago

    There is one fortunate result that will come from the SaaSpocalypse combining with Mythos (color me skeptical but let's assume it is as powerful as Anthropic tells CIOs).

    If anyone can clone any SaaS, then there will be millions of SaaS that offer all the features you need.

    How will you choose?

    AWS and Microsoft (and all the big clouds) make it easy for their customers to get hacked, and Mythos makes it more likely the cadence will only intensify.

    But, if I vibe code a hosting service which is pure rust and doesn't use any external libraries and never open sources my code, my attack surface is much smaller and I only have three customers anyway.

    Hackers are lazy and will go for the pond where the most fish live. AWS will always have a lot of marks and a lot of holes.

    AWS will be expensive because you are paying the tax they have to add to fend off the hordes. It'll be an intelligent choice to avoid working with Rome and find a little village in Bergen.

  • lowbloodsugar 18 hours ago

    When it comes to email: “Don’t shit where you eat” is the closest analogy I can think of. Have your email somewhere else, not on any service that might decide to lock your account for any reason. Have your domain ownership somewhere else. And have a plan already to move elsewhere if for some reason your email provider gets pissed at you.

  • h4kunamata a day ago

    AWS IAM is hot garbage; GCP might not be the coolest kid on the block, but its IAM rocks.

    AWS CLI??? Holy guacamole, what a mess. Using the AWS CLI feels like going through digital identity verification just to get the basics done.

    While GCP CLI is like "sure, here"!

    • cube00 a day ago

      It's a shame GCP's console and their CLI are both so painfully slow.

      You're also putting your business at risk with Google randomly banning accounts and not providing timely appeals. [1]

      [1]: https://news.ycombinator.com/item?id=45798827

      • vrick a day ago

        I mean this article is about AWS doing the exact same thing.

    • liveoneggs a day ago

      it's funny how being used to something makes it easier to use

  • fnord77 20 hours ago

    Are the other two big providers any different?

  • fHr 21 hours ago

    >works for AWS
    >quits the BS
    >needs more money
    >sells himself as a meat puppeteer once again to AWS
    >big bs corpo is still the same
    surprisedpikachu.jpeg ok

  • tootie 21 hours ago

    This ignores the fact that AWS is so far in the lead in cloud market share and even hosts so much of Anthropic's business. If you dabble, it's confusing. If you're an enterprise with a lot of expertise, then it's indispensable.

  • MagicMoonlight a day ago

    These complaints are very weak.

    Lambda is incredibly simple to use, it just runs a function for you.

    Not sure how you could burn so much with DynamoDB. It’s serverless and incredibly cheap. Must have been doing something insane, like scanning through a huge dataset over and over.

    Being salty that Gary couldn’t sell enough of his paid service and AWS is competing with it isn’t a meaningful complaint. I want something in AWS, not on Gary’s servers.

    • jerhewet 21 hours ago

      I wish our Lambdas would spin up faster, but otherwise I've been very happy with them over the past six years. We seldom run over the free tier limits, and when we do we get a bill for a couple of dollars. Dead simple to code for, dead simple to spin up a new instance or scale an instance if we need to.
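
      For anyone who hasn't tried it, "dead simple to code for" is fair; a whole service can be one function. A minimal sketch, assuming API Gateway's HTTP API (payload v2) event shape:

```python
import json

# About as small as a service gets: a Lambda handler behind an HTTP API.
def handler(event, context):
    # queryStringParameters is absent (not empty) when no params are sent.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"hello": name}),
    }
```

      No server, no port, no process supervision; the trade-offs discussed elsewhere in this thread (cold starts, per-request billing) are the price for that.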

  • waterTanuki 12 hours ago

    Anyone building with AWS or any cloud provider should be setting up a forked pipeline for their production data: one branch goes to the cloud provider DB used for production, the other to a local on-prem DB you regularly back up and always have on hand should you need to leave the cloud.
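
    Even without a full forked pipeline, the minimum version of this idea is a scheduled dump you run from a box you control. A sketch building a pg_dump invocation (host/db names are placeholders):

```python
import datetime

# Minimum viable "second copy outside the cloud": a nightly pg_dump.
def dump_command(host, db, out_dir="/backups"):
    stamp = datetime.date.today().isoformat()
    out = f"{out_dir}/{db}-{stamp}.dump"
    # Custom format (-Fc) is compressed and restorable with pg_restore.
    return ["pg_dump", "-Fc", "-h", host, "-d", db, "-f", out]

# Live usage (not executed here):
# import subprocess
# subprocess.run(dump_command("prod-db.example.com", "app"), check=True)
# ...then copy the file to a second provider, and actually test restores.
```

    The last comment in that sketch is the important one: an untested backup is a hope, not a backup.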

  • fafa09 18 hours ago

    same pain here

  • AIorNot 16 hours ago

    Yup -like the honesty

    Why don't cloud services have reviews like Amazon products? I’m so tired of enterprise sales-speak in corporate docs. Just cut the BS and do straight talk.

    If you haven’t used a service you shouldn’t have to search reddit for dev experience on it

  • themafia 15 hours ago

    > Somewhere in the depths of AWS some sort of security alarm had been triggered probably by the fact that my mostly dormant account suddenly started doing stuff with an expensive computer.

    You mean the alarm that shows up with a notification bell in your console? Why not just post that?

    > I am dreading having to "request quota" to be allowed to do that.

    Why? It works fine. I've done it several times.

    > IAM - the hideously complex auth and access rules system - this was invented by Lucifer sitting on his burning throne in the ninth level of Hell as the worst possible torment for those who have been sent below for using AWS.

    It's literally a JSON policy document.
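
    For readers who haven't seen one, the whole artifact really is this small (bucket name is made up):

```python
import json

# A minimal IAM policy: a version string and a list of statements, each with
# an effect, actions, and the resources they apply to.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}
print(json.dumps(policy, indent=2))
```

    The complexity people complain about comes from composing many of these (plus conditions, principals, and resource policies), not from the format itself.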

    > - once I noticed the complexity of IAM I could not unsee the complexity everywhere in AWS.

    All our policy actions are scripted at this point. You specify what functions the lambda calls and it builds the policy for you, sends it to IAM, and attaches it to the lambda.
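
    That scripted approach can be sketched as a small generator; the call-to-action mapping below is illustrative, not the actual tooling described above:

```python
# Sketch: derive a least-privilege policy from the AWS calls a Lambda makes.
# The mapping and names here are made up for illustration.
CALL_TO_ACTION = {
    "s3.get_object": "s3:GetObject",
    "s3.put_object": "s3:PutObject",
    "dynamodb.query": "dynamodb:Query",
}

def build_policy(calls, resource_arns):
    actions = sorted({CALL_TO_ACTION[c] for c in calls})
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": actions,
            "Resource": resource_arns,
        }],
    }

# Attach with boto3 (not executed here):
# import boto3, json
# boto3.client("iam").put_role_policy(
#     RoleName="my-lambda-role", PolicyName="generated",
#     PolicyDocument=json.dumps(build_policy(calls, arns)))
```

    Once the policy is generated from the code, IAM stops being something you hand-author and becomes a build artifact.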

    Every time I see someone complain about AWS I'm left wondering: did you read _any_ of the documentation? If you just want a Linux server then run that, but if you want out of the hassle of managing one, then you need to learn just a _handful_ of new tricks.

    If half the effort of complaining about AWS was spent reading documentation then most of these articles would never be published.