I have an open source project and started receiving a lot of security vulnerability reports in the last few months. Many of them are extreme corner cases, but there were some legit ones. They're all fixed now. Closed source software won't receive any reports, but it will be exploited with AI. So I definitely agree with the message of this article.
> Closed source software won't receive any reports
Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates.
Closed source companies can (and should!) also run their own security audits rather than passively waiting for volunteers to spend their tokens on it.
Those bug bounty programs now have to compete against the market for 0-days. I suppose they always did, but it seems the economics have shifted in favour of the bad actors - at least from my uninformed standpoint.
That still exists in the OSS world too, having your code out there is no panacea. I think we'll see a real swarm of security issues across the board, but I would expect the OSS world to fare better (perhaps after a painful period).
Of course everyone should do their own due diligence, but my point is mostly that open source will have many more eyes and more effort put into it, both by owners and by the community.
+1 - at this point all companies need to be continuously testing their whole stack. The dumb scanners are a thing of the past; the second your site goes live it will get slammed by the latest AI hackers.
I've recently set up nightly automated pentests for my open-source project. I'm considering starting to publish these reports as proof of security posture.
If the cost of a security audit becomes marginal, it seems reasonable to expect projects to publish the results of such audits frequently.
There's probably a quite hefty backlog of medium- and low-severity issues in existing projects for maintainers to suffer through first, though.
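Publishing nightly audit results could be as simple as rendering the scanner's findings into a dated report. A minimal sketch - the input field names and severity buckets here are assumptions for illustration, not any specific scanner's format:

```python
import datetime

def render_report(findings, scanner_name):
    """Render nightly scan findings as a small markdown report.

    `findings` is a list of {"severity": ..., "title": ...} dicts,
    e.g. parsed from a scanner's JSON output (field names assumed).
    """
    today = datetime.date.today().isoformat()
    # Tally findings per severity level.
    counts = {}
    for f in findings:
        sev = f["severity"].lower()
        counts[sev] = counts.get(sev, 0) + 1
    lines = [
        f"# Nightly security scan - {today}",
        f"Scanner: {scanner_name}",
        f"Total findings: {len(findings)}",
    ]
    for sev in ("critical", "high", "medium", "low"):
        lines.append(f"- {sev}: {counts.get(sev, 0)}")
    # One section per finding, for the published record.
    for f in findings:
        lines.append(f"## [{f['severity']}] {f['title']}")
    return "\n".join(lines)
```

A cron job or CI schedule could run the scan, feed the parsed findings through this, and commit the report to a public `security/` directory.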
I don't follow. It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories, than there is for good samaritan defenders. In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits, all whilst at least blocking the easiest method of finding zero-days - that is, being open source.
This really just seems like Strix marketing. Which is totally fair, but let's be reasonable here: any open-source business stands to lose way more by continuing to be open-source than it gains from relying on the benevolence of people scanning its code for them.
> It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories, than there is for good samaritan defenders.
Actually the opposite is obvious - the comment you replied to talked about an abundance of good Samaritan reports. It's strange to speculate about some nebulous "gain" when responding to facts about more than enough reports concerning open source code.
> In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits
That's one good Samaritan for a closed source app vs many for an open source one. Open source wins again.
> any open-source business stands to lose way more
That doesn't make any sense - why would it lose more when it has many more good Samaritans working for it for free?
You seem to forget that the number of vulnerabilities in a given app is finite; an open source app will reach a secure state much faster than a closed source one, while also gaining from a shorter time to market.
In fact, open source will soon be much better and more capable thanks to new and developing technological and organizational advancements that are next to impossible under a closed source regime.
Some users might be technically minded and have the capacity to check the codebase.
If a company wants to use your platform, it can run an audit with its own staff.
These are people really concerned about the code, not "good samaritans".
A new user is much more likely to scan the codebase and report vulnerabilities so they can be fixed than to illegally exploit them, since most people aren't criminals.
> Closed source software won't receive any reports, but it will be exploited with AI
How so? AI won't have access to the source code. In some cases AI may have access to deployed binaries (if your business deploys binaries), but I am not aware that it has the same capabilities against compiled code as against source code.
But in a SaaS world, all AI has access to is your API. It might still be up to no good, but surely you will be several orders of magnitude less exposed than with access to the source code.
Claude is already shockingly good at reverse engineering. Try it - it's really a step change. It has infinite patience, which was always the limited resource in decompiling/deobfuscating most software.
Which models have you had good luck with when working with ghidra?
I analyze crash dumps for a Windows application. I haven't had much luck using Claude, OpenAI, or Google models when working with WinDbg. None of the models are very good at assembly and don't seem to be able to remember the details of different calling conventions or even how some of the registers are typically used. They are all pretty good at helping me navigate WinDbg though.
Absolutely agree with you if we're talking about clean-room reverse engineering; but in the context of finding vulnerabilities it's a completely different story.
Assembly is still source code, so really it comes down to whether the copy protection obscures the executable code to the point where the LLM is not able to retrieve it on its own. And if it can't, someone motivated could give it the extra help it needs to start tracing how outside input gets handled by the application.
Yes exactly! I'm so glad I took this route with my startup. We can't bury our heads in the sand and think the vulnerabilities don't exist just because we don't know about them.
Exactly. I respect their decision to go closed source if that's what they need to do to make it a viable business, but just be honest about it. Don't make up some excuse around security and open source.
Separating the codebase and leaving 'cal.diy' for hobbyists is pretty much the classic open-core path. The community phase is over and they need to protect their enterprise revenue.
Blaming AI scanners is just really convenient PR cover for a normal license change.
I literally have a Claude Code skill called "/delib" that takes in any Node.js project/library and converts it to a dependency-less project using only the standard library.
It started as a what-if joke, but it's turned out to be amazing. So yeah, npmjs.com is just a reference site for me now, and node_modules stays tiny.
And the output is honestly superior. I end up with smaller projects, clean code, and a huge suite of property-based tests from the refactor process. And it's fully automatic.
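The comment is about Node.js, but the dependency-elimination idea translates to any ecosystem: many single-purpose library functions are a few lines of plain code. An illustrative Python analog (not the actual /delib skill) - inlining a `chunk` utility instead of pulling in a helper library:

```python
def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` items -
    the kind of one-function dependency that is easy to inline."""
    if size < 1:
        raise ValueError("size must be >= 1")
    return [items[i:i + size] for i in range(0, len(items), size)]
```

Property-based tests (as the comment mentions) pair well with this kind of refactor, since the inlined version must match the library's behavior exactly.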
This is part of it, for sure. It is also true that many open source businesses depended on it not being worth the trouble to figure out the hosting setup, ops, etc., and the code. Typical open source businesses also make a practice of holding a few features back from the public repo.
Now I can take an open source repo and just add the missing features, fix the bugs, and deploy in a few hours. The cost of integration and bug-fixing when the code is available is now a single capable dev for a few hours, instead of an internal team. The calculus is completely different.
1) Pulls you in with a catchy title, that at first glance seems like a dunk on Cal.com (whatever that is).
2) Takes the "we understand your pain" approach to empathize w/ Cal.com, so you feel like you're on the good vibes side.
3) Provides a genuine response to the actual problem Cal.com is dealing with. Something you can't dismiss out of hand.
4) But at the end of the day, the response aligns perfectly with the product they're promoting (just a click away from the homepage!)
This mix of genuine ideas and marketing is quite potent. Not saying this is all bad or anything, just found it a bit funny. The mixed-up-ness is the point!
Is it good marketing though? Personally, I do not use AI, and I don't think that opinion will change. I can't see the future, but right now I neither use nor depend on AI. It may work for some people, but even then I'm unsure whether it is really good marketing. Riding the hype train (which AI still is) is certainly easier, so that has to be considered.
> Security testing has to become an automated, integral part of the CI/CD pipeline. When a developer opens a pull request, an AI agent should immediately attempt to exploit it. When infrastructure changes, an AI should autonomously validate the new attack surface. You do not beat automated attackers by turning off the lights; you beat them by running better automation on the inside.
This feels like the core of the article, but it doesn't prove the need for open source.
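The quoted pipeline - an agent probing every pull request - would in practice start with a triage pass over the diff to decide where the exploitation agent should be pointed first. A toy sketch of that first step; the risky-pattern list and category names are entirely hypothetical, not anything the article specifies:

```python
import re

# Hypothetical risky patterns a pre-review hook might flag for
# deeper (e.g. AI-driven) exploitation attempts.
RISKY_PATTERNS = {
    "raw-sql": re.compile(r"execute\(|cursor\.|SELECT\s", re.IGNORECASE),
    "shell": re.compile(r"subprocess|os\.system|shell=True"),
    "deserialization": re.compile(r"pickle\.loads|yaml\.load\("),
}

def triage_diff(diff_text):
    """Scan the added lines of a unified diff and return the set of
    risk categories touched, so automated attack tooling can be
    aimed at the newly exposed surface first."""
    hits = set()
    for line in diff_text.splitlines():
        # Added lines start with "+"; "+++" is the file header.
        if line.startswith("+") and not line.startswith("+++"):
            for name, pattern in RISKY_PATTERNS.items():
                if pattern.search(line):
                    hits.add(name)
    return hits
```

In a real pipeline, the categories returned here would seed the prompts or test harnesses handed to whatever security agent runs against the PR branch.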
I wonder whether Cal actually has concerns about security (in which case they're wrong; this argument was false when people made it decades ago), or whether they just took a convenient excuse to do something they wanted to do anyway, because open source SaaS businesses are hard.
Pretty overreaching claim about another company's internal decisions and open source in general. There is a lot of incentive to stop open source these days.
One of which I am experiencing right now: somebody just copied my repo without crediting me; they didn't even try to change the README. It's pretty discouraging.
The other is security. The premise that volunteers will report vulnerabilities only really matters if you are big enough for a small portion of people to dedicate themselves to it; for the most part people take an open source tool, use it, and then forget about it - they only want stuff fixed.
Lastly, open source development kind of sucks so far. I've been working on a few different tools, and the amount of trolling and bad-faith actors I've had to deal with is exhausting. On top of that there is a constant stream of people just demanding stuff be fixed quickly.
I'm hopeful the article is right about its prediction, although I'm under the impression the attacker/defender dynamic is asymmetric, with the defender on the losing end. I hope someone can prove me wrong though...
It makes the assumption that the same amount of money needed to attack a critical vulnerability is also required to find and fix it.
Let's say we have a project with 100 modules, and it costs us $100,000 to check these modules for vulnerabilities. What is stopping an attacker from spending the same amount of money to scan, let's say, 10 modules, but this time with 10x the number of tokens per module than the defender had when hardening the software?
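A toy calculation of the asymmetry being described (all figures are the commenter's hypothetical, not real costs):

```python
# Both sides have the same total budget.
budget = 100_000
defender_modules = 100          # defender must cover everything
attacker_modules = 10           # attacker picks a narrow slice

defender_per_module = budget / defender_modules   # tokens spread thin
attacker_per_module = budget / attacker_modules   # tokens concentrated

# For the same spend, the attacker digs 10x deeper on each
# module it actually targets.
depth_advantage = attacker_per_module / defender_per_module
```

The defender's budget buys breadth; the attacker's buys depth on the few modules that matter to them - which is the asymmetry the comment is pointing at.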
Isn't the real danger now not the ability to find security vulnerabilities, but rather the ability of anyone to ask an LLM agent to rewrite your open source project in another language and thus work around whatever license your project has?
This is happening quite a lot actually. People just feed an existing project into their agent harness and have it regenerate more or less the same with a few tweaks and then they publish it.
I'm not sure how this works in the legal sense. A human could ostensibly study an existing project and then rewrite it from scratch. The original work's license shouldn't apply as long as code wasn't copy & pasted, right?
What happens when an automated tool does the same? It's basically just a complicated copy & paste job.
A lot of open source projects already have licenses that allow forking and selling the fork, it hasn't been a problem most of the time... there's a lot more to operating open source as a business beyond just shipping the code
Open source was always open to "many eyes", in theory exposing its zero-day vulnerabilities. But the "many eyes" belong to both the good and the bad actors.
As far as I am concerned... Way to go Cal.com, and a good reminder to never use your services.
Closing your source doesn't close your attack surface; it just closes off the community that would have helped you defend it. Security through obscurity is a tradeoff, not a strategy - at least that's how I feel.
At the same time - I heavily support open source and contribute a lot - I can't necessarily agree that security-through-obfuscation doesn't play a major role in slowing down attacks. Cloudflare has based its whole security model on being closed source (for example its anti-bot mechanism) and hard to reverse engineer, and they remain leaders as of today with few serious security breaches.
Some things just can't be truly secure, either: DDoS protection is mostly a guessing/preventive game, and exposing your firewall configs/scripts will make you more vulnerable, not less.
If your codebase isn't exposed, attackers are constrained by the network and other external restrictions, which greatly reduces the number of possible trials. Even with a swarm of residential proxies, it's not at all the same as inspecting a codebase in depth with thousands of agents and all the models.
It's a good question - is blackbox hacking as effective as whitebox hacking, for AI agents? I've gotta assume someone at Anthropic is putting together an eval as we speak.
I don't really know, but I have a story which might prompt some conversation about it.
At $WORK we had a system which, if you traced its logic, could not possibly experience the bug we were seeing in production. This was a userspace control module for an FPGA driver connected to some machinery you really don't want to fuck around with, and the bug had wasted something like three staff+ engineer-years by the time I got there.
Recognizing that the bug was impossible in the userspace code if the system worked as intended end-to-end, the engineers started diving into verilog and driver code, trying to find the issue. People were suspecting miscompilations and all kinds of fun things.
Eventually, for unrelated reasons, I decided to clean up the userspace code (deleting and refactoring things unlocks additional deletion and refactoring opportunities, and all said and done I deleted 80% of the project so that I had a better foundation for some features I had to add).
For one of those improvements, my observation was just that if I had to write the driver code to support the concurrency we were abusing I'd be swearing up a storm and trying to find any way I could to solve a simpler problem instead.
Long story short, I still don't know what the driver bug was, but the actual authors must've felt the same way, since when I opted for userspace code with simpler concurrency demands the bug disappeared.
Tying it back to AI and hacking, the white box approach here literally didn't work, and the black box approach easily illuminated that something was probably fucky. Given that AI can de-minify and otherwise spot patterns from fairly limited data, I wouldn't be shocked if black-box hacking were (at least sometimes) more token-efficient than white-box.
This seems to be extremely common. Been a very long time since I looked at Linux kernel stuff, but there were numerous drivers that disabled hardware acceleration or offloading features simply because they became unreliable if they were given heavy loads or deep queues.
Strix was so close to being the hero we deserve. I think these blue torches like Strix should offer their services for free to open source ships out at sea. There are three wins here: global goodwill, testimonials and reviews, and market loyalty.
Can any of the AI systems read binaries yet? Perhaps generate source code from an object file? If so, that would make access to source redundant for that type of analysis.
AI assisted decompiling has been a thing for a while now, from what I know most people are using assisted tooling for it.
With that said, it at least seems possible to read the binary itself, but most of the magic there is in execution, so you'd have to have an LLM behave kind of like a processor, I think.
I can't believe we still have people out there buying this baby-brain idea of "if muh code is open then people will find vulns!!" This has been disproven for 20+ years; catch up.
AI generated bullshit PRs are clearly the bigger issue in the OSS space.
The idea of tying source code to sustenance will soon be history. We will all remember the days when adding a few thousand smart lines of code meant you could gain notoriety, and through cheap viral copying expand those traits into wealth and worth. But software has always just been zeros and ones; the value only happens when it is interpreted.
The future is sharing. You may not believe it because your income is tied to being clever. Long term we are all more clever because of the sharing, even though your contribution sometimes does not add to your personal success. Asking a company or its individuals to forego their success will not make them add more to our future. But they will add to our future nonetheless, because they all feel, like we all do, that adding is what we are meant to do.
Of course this neglects why mostly-free things posted on the internet generally won. Take Microsoft, for example: all their money makers are licensed, yet at the same time you can download almost every single one for free and install it.
The people that go behind paywalls don't realize how much they'll have to spend on marketing to catch up to those that are open.
And that only frames the current state, where models are very expensive to train. Once model training gets close to the point where a group of individuals can afford it, it's pretty much game over for our current paradigm. The software police will be running around playing whack-a-mole with open-weight models and people all over the world.
Why would I create content that I don't get paid for and I don't even get credit for? Everyone who creates free content right now is simply doing the work of AI companies to make them more useful for free.
Search engines will cease to exist, so no one will search for your content and then click on your link. AI will simply regurgitate your content, take the money for tokens or a subscription, and not acknowledge you at all.
>There isn't a rule of economics that says better technology makes more, better jobs for horses. It sounds shockingly dumb to even say that
--Humans need not apply.
It's kind of funny that you think you're going to be making money writing software. If you lock up your software, who exactly are you selling it to anyway? It's like you're thinking 25% of the way through the situation, going "I can stay where I am and I don't have to change anything", and then crying later when it doesn't work.
What are you going to do - advertise in BYTE magazine (dead)? On Instagram? With a sandwich board on a Seattle street corner? What does the software market even look like in the AI age?
And much like how Google and Amazon eat your lunch now whenever they want, successful AI companies will buy up some software ideas and feed them to their models (which will be stolen later by other models). Anyone who sees your software will mock up a useful clone of it pretty quickly the first time they see it. And foreign AI companies will just outright steal it.
You're right that you won't create content you don't get paid for - you just won't be creating anything, while competing with the other unemployed masses for strawberry-picking jobs.
I don't think this will happen. If most content goes behind a paywall, releasing content for free will again become a valuable source of attention. It used to be so before the web got filled with so much free content that it lost any value.
First we blamed AI for layoffs, next we are blaming AI for the AI bait and switch.
It's entirely possible this CEO sincerely believes this, but that means you as a potential customer should stay away: now you know that the CEO of this company has no idea how technology works even at an executive level and/or that he doesn't consult his experts before making decisions.
That's literally not it - a CEO can know how technology works and still not apply that knowledge in his management; many people do things they "dislike" or don't believe in every day.
Well, that's what I mean: this guy is using this issue as a scapegoat to close-source the software and increase revenue as a result.
The pipeline goes like this:
Use open source license to gain traction and credibility > establish a customer base > pull the rug on open source to get everyone who depends on your product but isn't yet paying to pay.
I agree, it is shortsighted (next-quarter syndrome). First of all, the AI does not need source to find vulnerabilities, and further, it breaks the unwritten contract of exchanging source for eyeballs, which creates better source. I guess the CEO wants less security and a halt to the evolution of its code.
There is another product I use that has a freemium model. They hope to monetize a paid tier for users who use the product a lot.
In order to build trust, they open sourced their product. I forked it and removed the blocks on the freemium features in 15 minutes using Claude Code. I never published the code to anyone else, just used it myself.
Unfortunately, I think it isn't going to be tenable for systems to be fully open source going forward.
I have a large open source project and noticed the number of LLM-generated PRs is making it unmanageable. Every two weeks I go in and kill all of them, and when someone complains or asks why, I realize it was a real person and then I merge it.
Yes, I "fixed" it by disabling pull requests on the repository. I'm still happy to pull from other people's branches (and do say so in CONTRIBUTING.md)
But... playing devil's advocate, if AI makes it very easy to find exploits without the source code, wouldn't it be doubly effective finding them with the source code as well? And why is the dichotomy posed by this blog post "open source with AI reviews by everyone" vs "closed source but only the bad guys use AI"? What if the scenario was: closed source and the authors/security team use every AI tool at their disposal to find bugs? What do the community's eyeballs add to this equation, assuming (big if) AI review of exploits is such a force multiplier?
Before any knee-jerk reactions: big fan of open source, I'm not arguing this will kill it, I don't have the faintest idea what Cal.com is and I think a world without FOSS would be a tragedy, I run linux and most of my software on my personal PC (other than games) is FOSS.
I decided to not open source my latest project but it has nothing to do with security concerns. My code is perfectly secure and bug-free.
My concern is mostly financial. Most people would be in a better position to monetize my software than I am... Using AI to obfuscate the origin while appropriating all the key innovations. I wouldn't get any credit.
Also, I'm not really interested in humans anymore. I have human fatigue.
I mean, bold statement but statistically speaking it's almost certainly incorrect. I will say that, irrespective of whether source is open or closed, I would be deeply skeptical of a project that made this assertion.
Which works if you assume that AI can find 100% of your bugs.
It can't. So this is a complete waste of your time and will hide actual bugs behind a layer of confidence _and_ obscurity.
You're going to actually have to sit down and figure out how to provide real security in your product while earning profits. This is called "work." I understand Silicon Valley would like to earn money and not work. I am eager for these people to get their comeuppance.
Open source as such will never "die", but we only need to look at what happened in, say, the last 5 or 10 years: private entities with a commercial interest have been flexing their muscles. Microsoft - also known as Microslop these days - with GitHub is probably the most famous example still, but you can see others. One that annoys me personally is Shopify's recent influence - rubygems.org is basically just shopifygems.org now. See: https://blog.rubygems.org/2026/04/15/rubygems-org-has-a-publ...
"Contributors from both the RubyGems client team and Shopify are already working with us on making native gems a better experience for the Ruby community. "
There is a lot more I could add to this (see my complaint about how rubygems.org hijacks gems past the 100,000-download barrier; this was why I retired from using rubygems.org, and the year afterwards Ruby core purged numerous developers - the handwriting is soooooo clear that Shopify flexed its muscles here).
I think we need to make open source development more accessible to everyone, not just to corporations throwing money around to gain influence and leverage. I don't have a great idea for making this model work; economic incentives have to be there too, I get that part, and I am not sure which models could work. But right now we really have a big problem. We can also see this with age sniffing (age verification - see the article pointing at Meta orchestrating influence and lobbying) and many more changes. Something has to change. Hopefully people cleverer than me can come up with models that are actually sustainable, even if it isn't necessarily "fund an open source developer for a year" - there could be a more widespread "achieve xyz" or some other lower-finance effort, but again, I don't have a good suggestion here. Hopefully something improves here, though, because I am getting really tired of private interests constantly sabotaging and ruining the whole ecosystem while claiming they "improve" it. We have the old "War is peace. Freedom is slavery. Ignorance is strength." going again. Opposite day, every day.
Open source is dead, and the AI pundits are applying the wrong lessons. No one has to accept AI or play the game; all these AI companies don't work if everyone stops publishing. Let the AI-generated content industry have the publishing space - they're very adamant about taking it over and watering it down with slop.
I wrote some very nice expressive text for our deployment guide. My project manager took the guide and had Gemini break it down into plain boring bullet points. AI and the pundits can gf themselves in their journey to kill human expression.
Here is what I wrote in the guide:
"Post Deploy Responsibility
If you made it this far, say "Wow, I really did it and it was so easy!"
Did you say it? Good. Now you are entirely responsible for any issues or bugs that may arise from the newly deployed code. Don't go anywhere until the deploy has finished (usually takes a few minutes). While an issue or bug may not leave you directly at fault, you are responsible for coordinating any rollbacks or remediations that may be needed until the next deploy."
Here is what the product manager slopped it into:
"- Post deploy responsibility
- You are responsible for performing QA upon deployment
- You are responsible for any issues or bugs that may arise from newly deployed code
- You are responsible for coordinating any rollbacks or remediations that may be needed until the next deploy"
My paragraph wasn't long, hard to understand, or poorly written. I wouldn't have objected to a rewording or some changes, but the project manager chose to just paste it into Gemini and paste the result back. So my take is that they didn't understand what I wrote - which is a few sentences long, and frankly it's sad if a paragraph is too intense to read. When my project manager did this during the meeting I said, "RIP human expression", and their response was a very hasty "no, that's not what's happening". This is what all the pundits want to do to everyone and to society. Don't believe them when they say "it's just a tool"; that is just a tactic to get you to roll over so they can shove more AI in your face.
And your paragraph had a much bigger impact on the reader. Your paragraph reads like an experienced senior developer teaching you to not screw things up, while the AI generated bullet points sound like generic ToS that everyone ignores.
Enshittification has come for VC-backed open source. As someone on Twitter said, AI has deemed commercial open source obsolete, especially when users can point Claude Code at calcom on GitHub and ask it to build those scheduling features directly into their own product. That's what spooked Cal.
Don't get me wrong, but if virtually all modern software infrastructure lives on top of open source and it's mostly fine, then I'd imagine you can make a scheduling webapp secure independent of whether it's OSS or not.
It's OK if there's another reason for this transition; just be transparent about it and don't treat your users like children.
They don't owe you a complete list of reasons why they're closed-sourcing their software. They are not a publicly traded company, and no one (customers) actually cares if the product is open source or not.
I've always used and advocated for Cal.com because it's open source. I understand you need to make money and this is no longer the GTM, but don't lie about it.
> Closed source software won't receive any reports, but it will be exploited with AI.
What makes you so sure that closed-source companies won't run those same AI scanners on their own code?
It's closed to the public, it's not closed to them!
More eyes, more chances that someone will actually use the tools. Also, the tools and how you use them are not all the same.
Came here to say the same. Same tools + private. In security, two different defense mechanisms are always better than one.
But there are also tools that won't be nice and report security vulnerabilities - they'll exploit them.
There is no guarantee that being open means the vulnerabilities will be discovered by the good guys first.
What do you use for the pentests? Any OSS libraries?
This is a sandbox escape pentest so the only tooling needed is Claude Code and a simple prompt that asks it to follow a workflow: https://github.com/airutorg/airut/blob/main/workflows/sandbo...
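For anyone curious what the shape of such a nightly job looks like: cron fires, the agent CLI gets handed a workflow prompt, and a report lands in the repo. A minimal hypothetical sketch in Python - the prompt wording and the `claude -p <prompt>` invocation are illustrative assumptions, not the linked project's actual workflow file:

```python
# Hypothetical nightly pentest runner. The prompt text and the
# `claude -p` invocation are illustrative assumptions, not the
# linked project's actual setup.
PROMPT = (
    "Follow the sandbox-escape pentest workflow: enumerate the sandbox "
    "boundaries, attempt each escape vector, and write a markdown report "
    "with severity ratings to reports/."
)

def build_command(prompt: str) -> list[str]:
    """Assemble the CLI invocation a cron entry would run each night."""
    return ["claude", "-p", prompt]

cmd = build_command(PROMPT)
print(cmd[0], cmd[1])  # dry run: show the tool and flag being invoked
# import subprocess; subprocess.run(cmd, check=True)  # the real nightly run
```

The interesting part is entirely in the prompt/workflow text, not the plumbing - which is why publishing the resulting reports is cheap to do.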
I don't follow. It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories, than there is for good samaritan defenders. In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits, all whilst at least blocking the easiest method of finding zero-days - that is, being open source.
This really just seems like Strix marketing. Which is totally fair, but let's be reasonable here, any open-source business stands to lose way more by continuing to be open-source vs. relying on the benevolence of people scanning their code for them.
> It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories, than there is for good samaritan defenders.
Actually the opposite is obvious - the comment you replied to talked about an abundance of good Samaritan reports - it's strange to speculate about some nebulous "gain" when responding to facts about more than enough reports concerning open source code.
> In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits
That's one good Samaritan for a closed source app vs many for an open source one. Open source wins again.
> any open-source business stands to lose way more
That doesn't make any sense - why would it lose more when it has many more good Samaritans working for it for free?
You seem to forget that the number of vulnerabilities in a given app is finite; an open source app will reach a secure status much faster than a closed source one, in addition to gaining from a shorter time to market.
In fact, open source will soon be much better and more capable due to new and developing technological and organizational advancements which are next to impossible to happen under a closed source regime.
Some users might be security-sensitive and have the capacity to check the codebase. If a company wants to use your platform, it can run an audit with its own staff. These are people genuinely concerned about the code, not "good samaritans".
A new user is much more likely to scan the codebase and report vulnerabilities so they can be fixed than illegally exploit them since most people aren't criminals
Exactly. Who even hacks stuff? Most people will report the issue to earn xp and level up than actually exploit it.
Isn't that security by obscurity?
> Closed source software won't receive any reports, but it will be exploited with AI
How so? AI won't have access to the source code. In some cases AI may have access to deployed binaries (if your business deploys binaries) but I am not aware that it has the same capabilities against compiled code than source code.
But in a SAAS world, all AI has access to is your API. It might be still be up to no good but surely you will be several orders of magnitude less exposed than with access to source code.
Claude is already shockingly good at reverse engineering. Try it - it's really a step change. It has infinite patience, which was always the limited resource in decompiling/deobfuscating most software.
I agree with this too,
but with cal.com I don't think this is about security lol
open source will always be an advantage; you just need to decide whether it aligns with your business needs
given what the clankers can do unassisted and what more they can do when you give them ghidra, no software is 'closed source' anymore
Which models have you had good luck with when working with ghidra?
I analyze crash dumps for a Windows application. I haven't had much luck using Claude, OpenAI, or Google models when working with WinDbg. None of the models are very good at assembly and don't seem to be able to remember the details of different calling conventions or even how some of the registers are typically used. They are all pretty good at helping me navigate WinDbg though.
Guess that kind of depends on your definition of "source", I personally wouldn't really agree with you here.
absolutely agree with you if we're talking about clean room reverse engineering; but in the context of finding vulnerabilities it's a completely different story
Assembly is still source code, so really it comes down to whether the copy protection obscures the executable code to the point where the LLM can't retrieve it on its own. And if it can't, someone motivated could give it the extra help it needs to start tracing how outside inputs get handled by the application.
Yes exactly! I'm so glad I took this route with my startup. We can't bury our heads in the sand and think the vulnerabilities don't exist just because we don't know about them.
This might be the most painfully obvious advertisement I've ever seen on a forum.
I didn't mean it as such, but I can see why it would seem so. I've edited the link out now. Thanks for the feedback.
> The reasoning provided by their CEO, Bailey Pumfleet, is that AI has automated vulnerability discovery at scale,
That sounds like an excuse. The real reason is probably that it's hard to make a viable business out of developing open source.
AI makes a great scapegoat. Need to lay off people? "AI." Need to switch to closed source? "AI."
Exactly. I respect their decision to go closed source if that's what they need to do to make it a viable business, but just be honest about it. Don't make up some excuse around security and open source.
I don't know if I fully agree with this -- how many people were actually self-hosting cal infra? I def could be wrong though
You should be honest about your own personal financial incentive in making these posts.
separating codebase and leaving 'cal.diy' for hobbyists is pretty much the classic open-core path. the community phase is over and they need to protect their enterprise revenue.
blaming AI scanners is just really convenient PR cover for a normal license change.
It's also now ridiculously easy to simply cherry-pick from open source without actually "using" it.
"I need to do foo in my app. Libraries bar and baz do these bits well. Pick the best from each and let's implement them here"
I'd not be surprised if npmjs.com and its ilk turn into more of a reference site than a package manager backend soon.
I literally have a Claude Code skill called "/delib" that takes in any nodejs project/library and converts it to a dependency-less project only using the standard library.
It started as a what-if joke, but it's turned out to be amazing. So yeah, npmjs.com is just reference site for me now, and node_modules stays tiny.
And the output is honestly superior. I end up with smaller projects, clean code, and a huge suite of property-based tests from the refactor process. And it's fully automatic.
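The flavor of that transformation, shown here in Python terms as a toy analogue (the `leftpad` package in the comment is hypothetical): a one-line dependency call gets rewritten against the standard library, and the dependency disappears.

```python
# The kind of rewrite such a skill performs: a trivial dependency
# (here, a hypothetical `leftpad` package) replaced by the stdlib.
# before: from leftpad import left_pad; left_pad("7", 3, "0")
def left_pad(s: str, width: int, fill: str = " ") -> str:
    """Pad `s` on the left to `width` characters using `fill`."""
    return s.rjust(width, fill)

print(left_pad("7", 3, "0"))  # 007
```

Multiply that by every micro-dependency in a typical node_modules tree and it's easy to see why the tree stays tiny afterwards.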
It's that easy yes, and someday, we will literally be able to prompt "Redo the Linux kernel entirely in Zig" and it will practically make a 1:1 copy.
Ironically, given the recent supply chain attacks, that may be also more secure.
I'd think it's also much easier to spin up a (in some area) slightly better clone and eat into their revenue.
This is part of it for sure. It is also true that many open source businesses depended on it not being worth the trouble to figure out the hosting setup, ops, etc., and the code. Typical open source businesses also make a practice of holding a few features back from the public repo.
Now I can take an open source repo and just add the missing features, fix the bugs, deploy in a few hours. The value of integration and bug-fixing when the code is available is now a single capable dev for a few hours, instead of an internal team. The calculus is completely different.
I mean, it's hard to make a viable business regardless of if the tech is OSS or not, but it's often seen as more challenging this way.
Brilliant piece of content marketing:
1) Pulls you in with a catchy title, that at first glance seems like a dunk on Cal.com (whatever that is).
2) Takes the "we understand your pain" approach to empathize w/ Cal.com, so you feel like you're on the good vibes side.
3) Provides a genuine response to the actual problem Cal.com is dealing with. Something you can't dismiss out of hand.
4) But at the end of the day, the response aligns perfectly with the product they're promoting (a click away to the homepage!)
This mix of genuine ideas and marketing is quite potent. Not saying this is all bad or anything, just found it a bit funny. The mixed-up-ness is the point!
I'm sad to see this article being so upvoted while being kind of empty.
The real content could fit in a comment.
Is it good marketing though? I mean personally I do not use AI, and I don't think this opinion of mine will change. I can't look into the future, but right now I don't use nor do I depend on AI. I guess it may work for some people, but even then I am unsure whether that is really good marketing. Riding on a hype train (which AI right now still is) is indeed easier, so that has to be considered.
They are on the HN front page, therefore it's good marketing.
> Security testing has to become an automated, integral part of the CI/CD pipeline. When a developer opens a pull request, an AI agent should immediately attempt to exploit it. When infrastructure changes, an AI should autonomously validate the new attack surface. You do not beat automated attackers by turning off the lights; you beat them by running better automation on the inside.
This feels like the core of the article, but it doesn't prove the need for open source.
I wonder whether cal actually has concerns about security (in which case, they're wrong, this argument was false when people made it decades ago), or whether they just took a convenient excuse to do something they wanted to do anyway because Open Source SaaS businesses are hard.
Great PR piece by Strix, but I find mixed messages.
Cal.com folks are getting a red team for free, wouldn't that further convince them their closed source software is strong enough?
Isn't Strix's business companies paying for scans regardless of whether the software scanned is open source or closed?
Pretty overreaching claim about another company's internal decisions and open source in general. There is a lot of incentive to stop open source these days.
One of which I am experiencing right now is somebody just copying my repo, not crediting me, didn't even try to change the README. It's pretty discouraging.
The other is security reasons, the premise that volunteers will report vulnerabilities really matter if you are big enough for small portion of people to dedicate themselves, for the most part people take open source tool use it and then forget about it, they only want stuff fixed.
Lastly, open source development kinda sucks so far. I've been working on a few different tools, and the amount of trolling and just bad-faith actors I had to deal with is exhausting. On top of that, there is a constant stream of people just demanding stuff be fixed quickly.
I'm hopeful the article is right about its prediction, although I'm under the impression the attacker/defender dynamic is asymmetric and the defender is on the losing end. I hope someone can prove me wrong though...
That makes the assumption that the same amount of money needed to attack a critical vulnerability is also required to find and fix it.
Let's say we have a project with 100 modules, and it costs us $100,000 to check these modules for vulnerabilities. What is stopping an attacker from spending the same amount of money to scan, let's say, 10 modules, but this time with 10x the number of tokens per module than the defender had when hardening the software?
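The arithmetic behind that asymmetry is simple enough to write down; a toy model using the hypothetical numbers from the comment above (none of these are real costs):

```python
# Toy model of the defender/attacker budget asymmetry described above.
# All figures are the hypothetical numbers from the comment, not real costs.
BUDGET = 100_000          # dollars available to each side
TOTAL_MODULES = 100       # defender must cover everything
ATTACKER_MODULES = 10     # attacker picks a small subset to go deep on

defender_per_module = BUDGET / TOTAL_MODULES      # $1,000 per module
attacker_per_module = BUDGET / ATTACKER_MODULES   # $10,000 per module

concentration = attacker_per_module / defender_per_module
print(f"Attacker spends {concentration:.0f}x more per module")
```

The defender pays for breadth while the attacker pays for depth, so at equal budgets the attacker always wins the per-module token count on whatever subset they pick.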
Isn't the real danger now not the ability to find security vulnerabilities, but rather, the ability of anyone to ask an LLM agent to rewrite your open source project in another language and thus work around whatever license your project has?
You can do the same for closed source projects.
There are real limitations of course.
This is happening quite a lot actually. People just feed an existing project into their agent harness and have it regenerate more or less the same with a few tweaks and then they publish it.
I'm not sure how this works in the legal sense. A human could ostensibly study an existing project and then rewrite it from scratch. The original work's license shouldn't apply as long as code wasn't copy & pasted, right?
What happens when an automated tool does the same? It's basically just a complicated copy & paste job.
A lot of open source projects already have licenses that allow forking and selling the fork, it hasn't been a problem most of the time... there's a lot more to operating open source as a business beyond just shipping the code
So Cal.com favors security through obscurity.
Open Source was always open to "many eyes" in theory, exposing itself to zero-day vulnerabilities. But the "many eyes" belong to the good and the bad actors alike.
As far as I am concerned... Way to go Cal.com, and a good reminder to never use your services.
feels like people are arguing the wrong axis tbh
- it's not open vs closed anymore, it's more like bug finding going from a few devs poking around to basically infinite parallel scanners
- so now you don't get a couple of thoughtful reports, you get many edge cases and half-real junk. fixing capacity didn't change though
- closing the repo doesn't really save you, it just switches from white-box to black-box... and that's getting pretty damn good anyway
real problem is: vuln discovery scaled, patching didnāt. now everything is a backlog game
Closing your source doesn't close your attack surface, it just closes off the community that would have helped you defend it. Security through obscurity is a tradeoff, not a strategy... I mean, that's what I feel.
At the same time, I heavily support open source and contribute a lot, but I can't agree that security through obscurity doesn't play a major role in slowing down attacks. Cloudflare has based its whole security posture on being closed-source (for example its anti-bot mechanism) so it's hard to reverse engineer, and they remain leaders as of today with few serious security breaches.
Some things just can't be truly secure, either; DDoS protection is mostly a guessing/preventive game, and exposing your firewall config/scripts will make you more vulnerable, not less.
If your codebase isn't exposed, attackers are constrained by the network and other external restrictions, which greatly reduces the number of possible trials. Even with a swarm of residential proxies, it's not at all the same as inspecting a codebase in depth with thousands of agents and all the models.
It's a good question - is blackbox hacking as effective as whitebox hacking, for AI agents? I've gotta assume someone at Anthropic is putting together an eval as we speak.
I don't really know, but I have a story which might prompt some conversation about it.
At $WORK we had a system which, if you traced its logic, could not possibly experience the bug we were seeing in production. This was a userspace control module for an FPGA driver connected to some machinery you really don't want to fuck around with, and the bug had wasted something like three staff+ engineer-years by the time I got there.
Recognizing that the bug was impossible in the userspace code if the system worked as intended end-to-end, the engineers started diving into verilog and driver code, trying to find the issue. People were suspecting miscompilations and all kinds of fun things.
Eventually, for unrelated reasons, I decided to clean up the userspace code (deleting and refactoring things unlocks additional deletion and refactoring opportunities, and all said and done I deleted 80% of the project so that I had a better foundation for some features I had to add).
For one of those improvements, my observation was just that if I had to write the driver code to support the concurrency we were abusing I'd be swearing up a storm and trying to find any way I could to solve a simpler problem instead.
Long story short, I still don't know what the driver bug was, but the actual authors must've felt the same way, since when I opted for userspace code with simpler concurrency demands the bug disappeared.
Tying it back to AI and hacking, the white box approach here literally didn't work, and the black box approach easily illuminated that something was probably fucky. Given that AI can de-minify and otherwise spot patterns from fairly limited data, I wouldn't be shocked if black-box hacking were (at least sometimes) more token-efficient than white-box.
>simpler concurrency demands
This seems to be extremely common. Been a very long time since I looked at Linux kernel stuff, but there were numerous drivers that disabled hardware acceleration or offloading features simply because they became unreliable if they were given heavy loads or deep queues.
Strix was so close to being the hero we deserve. I think these blue torches like strix should offer their services for free to open source ships out at sea. There are 3 wins here, GLOBAL GOOD WILL, testimonial and reviews, and market loyalty reward.
Is there any recent research on whether open or closed-source projects are more secure? I am genuinely curious if anyone has studied the question.
Can any of the AI systems read binaries yet? Perhaps generate source code from an object file? If so, that would make access to source redundant for that type of analysis.
AI-assisted decompiling has been a thing for a while now; from what I know, most people are using assisted tooling for it.
With that said, it at least seems possible to read the binary itself, but most of the magic there is in execution, so you'd have to have an LLM behave kind of like a processor, I think.
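As a rough illustration of the "assisted tooling" workflow people describe: dump the disassembly with a standard tool and feed it to a model in context-window-sized chunks. A sketch under assumptions - the chunk size is arbitrary and `ask_model` is a stand-in for whatever LLM client you use; only the `objdump -d -M intel` invocation is a real tool:

```python
import subprocess

def disassemble(path: str) -> str:
    """Disassemble a binary with objdump (Intel syntax)."""
    out = subprocess.run(
        ["objdump", "-d", "-M", "intel", path],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

def chunk(text: str, max_chars: int = 16_000) -> list[str]:
    """Split disassembly on line boundaries so each piece fits in context."""
    lines = text.splitlines(keepends=True)
    chunks, cur, size = [], [], 0
    for line in lines:
        if size + len(line) > max_chars and cur:
            chunks.append("".join(cur))
            cur, size = [], 0
        cur.append(line)
        size += len(line)
    if cur:
        chunks.append("".join(cur))
    return chunks

# for piece in chunk(disassemble("./target")):
#     ask_model("Trace how external input reaches this code:\n" + piece)
```

The model never "executes" anything here; it just pattern-matches over text, which is exactly why the infinite-patience point upthread matters more than raw assembly skill.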
Yes, the current meta for ctfs, which includes challenges for exploiting binaries, is to just throw an LLM at it.
Seems like flimsy reasoning from the Cal.com CEO. How should we think about Strix vs. foundational model releases like Mythos?
How long before LLMs perform perfect disassembly exploitation...
https://x.com/steipete/status/2044423791405924562 very soon it seems...
Related:
Cal.com is going closed source
https://news.ycombinator.com/item?id=47780456
I can't believe we still have people out there buying this baby-brain idea of "If muh code is open then people will find vulns!!" This has been disproven for 20+ years, catch up.
AI generated bullshit PRs are clearly the bigger issue in the OSS space.
The idea of tying source code to sustenance will soon be history. We will all remember the days when adding some few thousand smart lines of code meant you could gain notoriety and through cheap viral copy expand those traits to wealth and worth. But software has always just been zeros and ones, the value only happens when interpreted.
The future is sharing; you may not believe it because your income is tied to being clever. Long term we are all more clever because of the sharing, and your contribution sometimes does not add to your personal success. Asking a company or its individuals to forgo their success will not make them add more to our future. But they will add to our future nonetheless, because they all feel, like we all do, that adding is what we are all meant to do.
It's just an excuse. Classic open source rug pull here.
a lot of the vulnerabilities in web apps come from people trying to be too smart for their own good.
use battle-tested frameworks such as Rails or Django and you won't make rookie security mistakes.
Except that Django got so many criticals we can't even list them on a thread here, but yeah, using known and ancient frameworks is generally smart.
All content is going to go behind paywalls.
There is zero incentive or reason for content creators to let AI slurp their content for free and distribute it and get all the money from it.
Everything new will be licensed and if AI companies want access to it, they will need to pay for it, just like we will.
Will it help? AI authors will just then buy those subscriptions and in the big picture it won't cost that much.
Of course this neglects why the mostly-free things posted on the internet generally won. Take Microsoft, for example: all their money-makers are licensed, yet at the same time you can download almost every single one for free and install it.
The people that go behind paywalls don't realize how much they'll have to spend on marketing to catch up to those that are open.
And that only frames the current state, where models are very expensive to train. Once model training is close to the point where a group of individuals can afford it, it's pretty much game over for our current paradigm. The software police will be running around trying to play whack-a-mole on open weight models with people all over the world.
Why would I create content that I don't get paid for and I don't even get credit for? Everyone who creates free content right now is simply doing the work of AI companies to make them more useful for free.
Search engines will cease to exist, so no one will search your content and then click on your link. AI will simply regurgitate your content and take the money for tokens or subscription and not acknowledge you at all.
>There isn't a rule of economics that says better technology makes more, better jobs for horses. It sounds shockingly dumb to even say that
--Humans need not apply.
It's kind of funny that you think you're going to be making money writing software. If you lock up your software who exactly are you selling it to anyway? It's like you're thinking 25% through the situation then going "I can stay where I am and I don't have to change anything", and then crying later when it doesn't work.
What are you going to do, advertise in BYTE magazine (dead). On Instagram? With a sandwich board on a Seattle street corner? What does the software market even look like in the AI age.
And much like how Google and Amazon eat your lunch now whenever they want, successful AI companies will buy up some software ideas and feed them to their models (which will be stolen later by other models). Anyone who sees your software will mock up a useful clone of it pretty quickly the first time they see it. And foreign AI companies will just outright steal it.
You're right you won't create content that you don't get paid for, you just won't be creating anything while competing with the other unemployed masses for strawberry picking jobs.
I don't think this will happen. If most content goes behind a paywall, releasing content for free will again become a valuable source of attention. It used to be so before the web got filled with so much free content that it lost any value.
I disagree. AI will slurp their content so quickly that no one will notice.
First we blamed AI for layoffs, next we are blaming AI for the AI bait and switch.
It's entirely possible this CEO sincerely believes this, but that means you as a potential customer should stay away: now you know that the CEO of this company has no idea how technology works even at an executive level and/or that he doesn't consult his experts before making decisions.
That's literally not it; a CEO can know how technology works and still not apply that knowledge in their management. Many people do things they "dislike" or don't believe in every day.
Well, that's what I mean: this guy is using this issue as a scapegoat to close-source the software and increase revenue as a result.
The pipeline goes like this:
Use open source license to gain traction and credibility > establish a customer base > pull the rug on open source to get everyone who depends on your product but isn't yet paying to pay.
This is just an excuse to close source their project while blaming AI. Spineless bullshit excuse instead of owning your choices.
Shame
I agree, it is shortsighted (next-quarter syndrome). First of all, the AI does not need source to find vulnerabilities, and further, it breaks the unwritten contract of exchanging source for eyeballs, which creates better source. I guess the CEO wants less security and the stopped evolution of its code.
It's like the layoffs. Let's blame this thing we wanted to do for a while on AI.
There is another product I use that has a freemium model. They hope to monetize a paid tier for users who use the product a lot.
In order to build trust, they open source their product. I forked it, removed the blocks from the freemium feature in 15 minutes using Claude Code. Never published the code to anyone else, just used it myself
Unfortunately, I think it isn't going to be tenable for systems to be fully open sourced going forward.
I have a large open source project and noticed the number of LLM-generated PRs is making it unmanageable. Every two weeks, I go in, kill all of them, and when someone complains or asks why, I realize it was a real person and then I merge it.
Is anyone else seeing this / fixed this problem?
Yes, I "fixed" it by disabling pull requests on the repository. I'm still happy to pull from other people's branches (and do say so in CONTRIBUTING.md)
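A middle ground between disabling PRs and hand-triaging everything might be a cheap pre-filter that flags likely LLM spam for closer review. A hypothetical sketch - the signals below are illustrative guesses at common tells, not a validated classifier:

```python
# Hypothetical triage heuristics for incoming PRs; the signals below are
# illustrative guesses at common LLM-spam tells, not a validated classifier.
SPAM_SIGNALS = (
    "as an ai", "comprehensive solution", "i have carefully analyzed",
)

def looks_like_llm_spam(pr: dict) -> bool:
    """Flag a PR (given as a dict of title/body/diff stats) for manual review."""
    text = (pr.get("title", "") + " " + pr.get("body", "")).lower()
    if any(sig in text for sig in SPAM_SIGNALS):
        return True
    # Tiny diffs with very long boilerplate descriptions are another tell.
    return pr.get("changed_lines", 0) < 5 and len(pr.get("body", "")) > 2000

flagged = looks_like_llm_spam(
    {"title": "Fix typo", "body": "x" * 3000, "changed_lines": 1}
)
print(flagged)  # True
```

It wouldn't catch everything (and an agent can be prompted around any fixed wordlist), but it shifts the default from "maintainer reads everything" to "maintainer reads what survives the filter".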
> kill all of them and when someone complains or asks why, I realize it was a real person and then I merge it.
I mean an AI skill is perfectly capable of doing this exact same thing.
I'm pro FOSS, militantly so. FSF-style.
But... playing devil's advocate, if AI makes it very easy to find exploits without the source code, wouldn't it be doubly effective finding them with the source code as well? And why is the dichotomy posed by this blog post "open source with AI reviews by everyone" vs "closed source but only the bad guys use AI"? What if the scenario was: closed source and the authors/security team use every AI tool at their disposal to find bugs? What do the community's eyeballs add to this equation, assuming (big if) AI review of exploits is such a force multiplier?
Before any knee-jerk reactions: big fan of open source, I'm not arguing this will kill it, I don't have the faintest idea what Cal.com is and I think a world without FOSS would be a tragedy, I run linux and most of my software on my personal PC (other than games) is FOSS.
I decided to not open source my latest project but it has nothing to do with security concerns. My code is perfectly secure and bug-free.
My concern is mostly financial. Most people would be in a better position to monetize my software than I am... Using AI to obfuscate the origin while appropriating all the key innovations. I wouldn't get any credit.
Also, I'm not really interested in humans anymore. I have human fatigue.
>My concern is mostly financial.
Then AI will eat your lunch anyway if the financial part has anything at all to do with the code.
AI can decompile code very well.
> My code is perfectly secure and bug-free.
I mean, bold statement but statistically speaking it's almost certainly incorrect. I will say that, irrespective of whether source is open or closed, I would be deeply skeptical of a project that made this assertion.
> The real solution: fight fire with fire
Which works if you assume that AI can find 100% of your bugs.
It can't. So this is a complete waste of your time and will hide actual bugs behind a layer of confidence _and_ obscurity.
You're going to actually have to sit down and figure out how to provide real security in your product while earning profits. This is called "work." I understand Silicon Valley would like to earn money and not work. I am eager for these people to get their comeuppance.
"Open Source Isn't Dead."
Well ...
Open Source as such will never "die", but we only need to look at what happened in, say, the last 5 or 10 years. Private entities with a commercial interest, have been flexing their muscles. Microsoft - also known as Microslop these days - with Github is probably the most famous example still, but you can see other examples. One that annoys me personally is Shopify's recent influence - rubygems.org is basically just shopifygems.org now. See: https://blog.rubygems.org/2026/04/15/rubygems-org-has-a-publ...
"Contributors from both the RubyGems client team and Shopify are already working with us on making native gems a better experience for the Ruby community. "
There is a lot more I could add to this (see my complaint about how rubygems.org hijacks gems past the 100.000 download barrier; this was why I retired from using rubygems.org, and then the year afterwards ruby core purged numerous developers. The handwriting is soooooo clear that shopify flexed their muscles here).
I think we need to make open source development more accessible to everyone, not just corporations throwing their money around to gain influence and leverage. I don't have a great idea for how to make this model work; economic incentives kind of have to be there too, I get that part, and I am not sure which models could work. But right now we really have a big problem.
We can also see this with age sniffing (age verification - see the article that pointed at Meta orchestrating influence and lobbyism) and many more changes. Something has to change. Hopefully some people cleverer than me can come up with models that are actually sustainable, even if it may not necessarily be a "fund an open source developer for a year". There could be a more wide-spread "achieve xyz" or some other lower-finance effort - but again, I don't have a good suggestion here. Hopefully something improves here though, because I am getting really tired of private interests constantly sabotaging and ruining the whole ecosystem while claiming they "improve" it. We have the old "War is peace. Freedom is slavery. Ignorance is strength." going again. Opposite day, every day.
There are no answers, only compromises.
Corporations are about money.
Individuals need to eat.
Governments love to concentrate power.
Open source is dead, and the AI pundits are applying the wrong lessons. No one has to accept AI or play the game; all these AI companies don't work if everyone stops publishing. Let the AI-generated content industry have the publishing space; they're very adamant about taking it over and watering it down with slop.
I wrote some very nice expressive text for our deployment guide. My project manager took the guide and had Gemini break it down into plain boring bullet points. AI and the pundits can gf themselves in their journey to kill human expression.
Here is what I wrote in the guide:
"Post Deploy Responsibility
If you made it this far, say "Wow I really did it and it was so easy!"
Did you say it? Good. Now you are entirely responsible for any issues or bugs that may arise from the newly deployed code. Don't go anywhere until the deploy has finished (usually takes a few minutes). While an issue or bug may not leave you directly at fault, you are responsible for coordinating any rollbacks or remediations that may be needed until the next deploy."
Here is what the product manager slopped it into:
"- Post deploy responsibility
My paragraph wasn't long, hard to understand, or poorly written. I wouldn't have objected to a rewording or some changes but the project manager chose to just copy paste it into Gemini and copy and paste it back. So my take is that they didn't understand what I wrote. Which is a few sentences long and frankly sad if a paragraph is too intense for you to read. When my project manager did this during the meeting I said, "RIP human expression" and their response was a very hasty "no that's not what's happening". This is what all the pundits want to do to everyone and society. Don't believe them that "it's just a tool", that is just a tactic to get you to rollover so they can shove more AI in your face.And your paragraph had a much bigger impact on the reader. Your paragraph reads like an experienced senior developer teaching you to not screw things up, while the AI generated bullet points sound like generic ToS that everyone ignores.
Enshittification has come for VC-backed open source. As someone on Twitter said, AI has deemed commercial open source obsolete, especially when users can point Claude Code at calcom on GitHub and ask it to build scheduling features directly into their product. That's what spooked Cal.
cofounder here
going closed source does not mean we are not fighting fire with fire
we are using a handful of internal AI vulnerability scanners for months now
being open source simply increases risk by 5x to 10x according to several security researchers we are working with https://cal.com/blog/continuous-ai-pentesting-vulnerability-...
Don't get me wrong, but if virtually all modern software infrastructure lives on top of open source and it's mostly fine, then I'd imagine you can make a scheduling webapp secure independent of whether it's OSS or not.
It's OK if there's another reason for this transition; just be transparent about it and don't treat your users as children.
They don't owe you a complete list of reasons why they're closed-sourcing their software. They are not a publicly traded company, and no one (customers) actually cares if the product is open source or not.
I've always used and advocated for Cal.com because it's open source. I understand you need to make money and this is no longer the GTM, but don't lie about it.