I've been tempted to buy one and do "real dev work" on it just to show people it's not this handicapped little machine.
I built multiple iOS apps and went through two start up acquisitions with my M1 MBA as my primary computer, as a developer. And the neo is better than the M1 MBA. I edited my 30-45 min long 4k race videos in FCP on that air just fine.
> I built multiple iOS apps and went through two start up acquisitions with my M1 MBA as my primary computer, as a developer. And the neo is better than the M1 MBA. I edited my 30-45 min long 4k race videos in FCP on that air just fine.
Before I was a professional software developer, I used a scrawny second-hand laptop with a Norwegian keyboard (I'm not Norwegian) because that was what I could afford: https://i.imgur.com/1NRIZrg.jpeg
This was the computer I was developing PHP backends on + jQuery frontends, and where I published a bunch of projects that eventually led to me getting my first software development job, in a startup, and discovering HN pretty much my first day on the job :)
The actual hardware you use seems to me like it matters the least, when it comes to actually being able to do things.
I still manage and develop my php/jquery saas product on a 2011 27" iMac running Linux Mint, with an SSD being the only upgrade. Runs better than most new windows machines. No complaints.
I switch between Thinkpad T420s and PineBook Pro for all the hobby work.
T420s has loose USB ports and the power socket is almost falling off, so I plan to replace it by a 5 years old T14 G2 in the coming months.
I can afford the latest MacBook, but I'd rather not generate more e-waste that there is, and more importantly I feel closer to my users, and my code is efficient and straight to the point.
My non-hobby laptop is an old cheap Dell from 5-6 years ago.
The best laptop I ever had was a maxed-out Thinkpad P7x, and it came with the most meaningless job ever.
I can only compare that job to the one at a unicorn that gave me the latest and greatest MacBook. Not only the job was meaningless, the whole industry made no sense to me.
I wrote 99% of a large PHP app on an six year old laptop with a single 17" LCD. Meanwhile, at my desk, I had a Dell workstation with 3 monitors at the time, but it was easier to squirrel away in a corner somewhere, undisturbed.
After all, the actual server ran the code, I just needed text editors, terminal windows, and web browsers.
I started my business back in 2006 with an ancient 306 laptop - it was practically free, it ran VIM just fine, and that was all I needed it to do to crank out PHP until the cows came home.
Your hardware matters quite a bit if you're doing lower level things and the architecture is not the same as you're developing for. But apparently HN is all web devs
For my vacation I just decided not to bring a laptop, but to use my Android phone (a Galaxy S22) with an HDMI adapter and a Bluetooth travel keyboard. Plugged it into the TV in our accommodation and had a lot of fun.
Running Neovim in Termux was fine. Developing Elixir was no problem; the test suite took 5s on my phone versus 1s on my laptop. Rust and Cargo compiles were slow enough that I didn't really enjoy them, though.
It meant I could pack up instantly and have an agent running review workflows in my pocket while I was out and about, and I didn't really notice a big battery hit.
I'm glad enough people got M1 MacBook Airs now that the broader sentiment within the commentariat is changing and people are pushing back on the dismissals.
8GB has ALWAYS been fine in Apple Silicon macOS. RAM usage on a fresh boot is a meaningless statistic (unused RAM is wasted RAM). And they're just plain capable!
It's starting to show its age, but I've been using a 2019 MacBook Pro with the Intel chip and 16GB of memory. Still handles multiple terminal sessions with Claude Code and Codex simultaneously, building in Xcode, running Docker in the background, etc.
(Maybe the fans sometimes sound like a jet engine taking off…)
Finally just put an order in for a new 16" MBP M5 Max with 48GB memory only because it looks like they're going to stop supporting the Intel stuff this year and no more software updates. It'll probably be obsolete in six months with the rate things are going, but I've been averaging seven years between upgrades so it should be good!
Oh my. All I have to say is cherish the first week of your M* experience. :D When I got rid of my intel MBP (it was an i7) for my MBA it was astonishing how fast and smooth it was.
I agree. It was utterly ridiculous how noticeable the improvement was. I was doing Z3 solving for the ICFP contest the first couple of weeks after getting the M1 Air, and it was consistently smoking my teammates' maxed-out i7 MBPs.
Sort of, they have no "hands", LLMs can only respond that they want to execute a tool/command. So they do that a lot to: read files, search for things, compile projects, run tests, run other arbitrary commands, fetch stuff from the internet etc.
Obviously the LLM inference is super heavy, but the actual work / task at hand is being executed on the device.
I use a 2015 MacBook Pro all the time--like right now. It does have 16GB of memory. It's what sits on my dining room table where I do most of my writing/browsing and which I take for travel. I do have an Apple Silicon MacBook Pro in my office but my downstairs "office" is a lot lighter and airier.
> I use a 2015 MacBook Pro all the time--like right now.
I have a 2010 MacBook Air that I still use when traveling.
The battery is completely shot, but it works fine when plugged in. And if I'm on the road, I don't use my computer until I get to the hotel anyway. And even then, it's just fine for e-mail, browsing, and even Photoshop.
I have one of these (it's my only Mac), but it only has 2GB of RAM, so it's kinda rough. I tried Mint on it, but IIRC it might not have the GPU drivers? I just bought it a new SSD which helped a bit.
I think this one had a battery replacement because it was bulging. But it's definitely in the class of devices that, if it gets swiped or lost, is basically in the <ehh> category as opposed to my newer one.
I'm probably going to give a newish iPad and magnetic keyboard a spin on my next trip, mostly to see how it goes.
Former employee of mine had the 2019 MBP as well. After a few years he had the same problem with the fans -- if you haven't already, pop it open and clean the fans and vents. You'll probably need a little brush along with compressed air. Lots of stuff comes up on Google. Great machine btw. Good luck!
I was using a M1 Mac Mini and only 8GB of RAM on it to build iOS apps for maybe a year. It's absolutely doable, though it very noticeably gets a little less snappy when building projects. When building in Xcode and then switching to Firefox to browse for instance, I could tell it took slightly longer to switch tabs and YouTube playback would occasionally stutter if too much was happening.
I also was using an Intel MacBook Pro with 16GB at the time. Doing the same thing there was much smoother and snappier. On the whole, it actually made me want to just use the laptop instead since it "felt" nicer. (This isn't measuring build times or anything like that, just snappiness of the OS.)
People usually forget 8GB isn't 8GB. Memory compression means you can store roughly 2x (lz4) to 3x (zstd) as much data in memory as the physical capacity would suggest. And in the worst case, reading swap back from disk (writes matter less, since they can be buffered asynchronously) is so much faster with NVMe SSDs.
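A rough illustration, with stdlib zlib standing in for the lz4/zstd compressors the OS actually uses (the sample page content is made up, and real ratios depend entirely on what's in memory):

```python
import zlib

# Simulate one 4 KB "page" of fairly repetitive application memory.
page = (b'{"user": null, "cache": true, "items": []}\n' * 100)[:4096]

# Fast compression level, in the spirit of lz4's speed/ratio tradeoff.
compressed = zlib.compress(page, level=1)

ratio = len(page) / len(compressed)
print(f"{len(page)} B -> {len(compressed)} B (~{ratio:.0f}x)")
```

Structured, repetitive data like this compresses far better than 2-3x, while random or already-compressed pages barely compress at all; the OS-level averages land in between.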
The worst corner they cut is no keyboard backlighting. That saves them what, $1 BoM per MacBook Neo? Especially because now they have to put up an entire new keyboard production line instead of just piggybacking off of the Air keyboard production line.
I just retired my M1 Air to being a server this month. They're very capable laptops. If the Neo is even comparable in spec, it's excellent for the price.
My m1 air with 1TB ssd and 16GB of ram is a little champion, I use it during travel to play indie games like Hades II or Slay the Spire, and it works really well, better than my Steam Deck which broke. The only issue it really has is when I try to plug it into my docking station it struggles mightily with 2 2K screens and a 4K screen, so I just use my desktop in that case.
I am jealous of my wife's 13" M5 iPad Pro though, that OLED screen is gorgeous, a wonder of modern engineering.
I setup a self hosted runner and then use that in my CI workflows. Then I disabled it from sleeping so it can clamshell forever and now it sits here in my living room silently workin' https://imgur.com/a/EaBICdo
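The setup is roughly this (a sketch; OWNER/REPO and the token are placeholders from the repo's Settings > Actions > Runners page, and config.sh/run.sh ship with GitHub's runner package):

```shell
# Keep macOS awake even with the lid closed (clamshell mode).
sudo pmset -a disablesleep 1

# Register the Mac as a GitHub Actions self-hosted runner, then start it.
./config.sh --url https://github.com/OWNER/REPO --token <TOKEN>
./run.sh
```

Workflows then target it with `runs-on: self-hosted`.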
And, presumably for a combination of the Mac build (and hardware) being of niche interest and sitting outside the standard Linux workflows so it's annoying to administer. And serving a money-making audience (iOS app devs) who have a revenue stream and see the extra CI cost as worth it.
I have an older 8GB MacBook Air. This is false. I routinely have Slack, Chrome, iTerm, Visual Studio Code, and more open on it. It's fine.
Those apps donât need every single byte of memory you see in Activity Monitor to be active in RAM all of the time. The OS swaps out unused parts to the very fast SSD. If you push it so far that active pages are constantly being swapped out as apps compete then you start to notice, but the threshold for that is a lot higher than HN comments seem to think.
It really isn't. It is a capable machine, but modern software has made it a lemon. And that is the only reason Apple sells it: so that whoever buys it needs to buy another one prematurely, generating another sale.
Everything from Apple to modern software is rotten to its core.
…in reply to someone who just said their experience is fine, and included details. If you just want to rant about Apple, have at it, but you're going to have to do better than "nuh, uh" if you want to be convincing.
Well I could say that it isn't enough for vscode alone. And I'd be right. It all depends on how and what you use vscode for.
8GB really shouldn't be an option in 2026, it is just shortsighted and an insanely uneven build.
I could rant about Dell too. Or most other manufacturers (surprise, greed isn't apple exclusive). But Apple at least tries to keep the appearance of a higher profile.
> Well I could say that it isn't enough for vscode alone. And I'd be right. It all depends on how and what you use vscode for.
Fair enough; though experience says 8GB will run VS Code, it would very much depend on the use case, I agree. OTOH, I would argue that anyone working VS Code that hard probably isn't buying 8GB machines, but OP did say they're running it, so it's up for discussion.
I'm sick to death of this. It's so divorced from reality in 2026 that I see it as a lowest-common-denominator populist political catchphrase more than any legitimate contribution to the conversation. My min-spec MacBook Pro from six years ago doesn't flinch at this, and it barely flinches at a whole lot more.
Can we please just move on? Maybe get your hardware checked if youâre legitimately still having these issues.
I've been finding it hard to wean myself off the standalone app but another major reason to do so is opening threads in separate tabs. I find as soon as I'm involved in two or more conversations on there it's super easy to start losing track of things.
I am talking from experience with an M1 & 8GB RAM. I had to restart either the browser or its YouTube tab processes at least once every couple of days to stop the whole system from lagging.
I could have two browser windows open in the late 1990s. I have about a thousand times as much RAM now. So even with 10x more bloat in the pages, I should be able to open 200 tabs just fine.
I wrote a fix for node that got upstreamed a few years ago on a Lenovo Thinkpad 3 Chromebook. I'm actually commenting from it now. It's not a workhorse by any means, but for $99, it's not bad. A 1.1GHz Celeron processor with 4GB of memory is able to compile projects like node, python, Erlang, etc. without much hassle. It just takes a lunch break :)
Any modern Mac is more than capable. I had the baseline M1 Macbook Air that I did work on as well, just to see how that fared. Much better than this machine - 10x the price, but more than 10x the performance. This one is great as a "I don't mind if I break it or lose it" device.
I was doing Android development and Verilog synthesis on a mobile Nehalem i5 in 2020. That machine is still totally adequate for anything a "normal person" does with their computer, provided they have good tab hygiene. The reality is that (unless you play video games and/or you want local LLM inference) the demands people place on their computers haven't changed significantly in at least 10 years.
Oh that made it seem like I was the driving factor. Maybe for the first one (Percy.io) I can claim a large part of that success (owning the SDKs and support end to end).
The other I just owned the front end infra and was on the growth team. The rest of the folks were the stars on that one.
Edit: I guess I brought that up because I don't know any more 'real work' than that, ha. What is 'real work'?
It would have been a better fit for me than the M4 Air; I literally use it only for typing and browsing, plus a couple of Mac-only tools. Brilliant machine but complete overkill for me. It's almost tempting to switch just to get rid of the display notch.
I'm still doing iOS dev on my 2020 M1 MBP, and it's fine! I expect that if I change out its battery and apply new thermal paste it would run for another 6 years.
Better in terms of raw specs. The original M1 Air also came with 8GB of RAM, and the A18 Pro in the Neo is faster than the version of the M1 that shipped in the base model Air
The argument is misrepresented - I think it's about frustration and convenience, not achievability.
I developed some work that keeps tens of thousands of people alive every day on a $100 Acer netbook almost 15 years ago. The tools are always there, I don't think anyone thinks the work is actually impossible to do on a limited machine.
most dev workflows from pre-2021 can probably run just fine on a Neo - i think once you get into conductor / 8 terminals with claude code territory that's where things start to slow down
i just got an m5 max with 128gb of ram specifically to run local llms
Claude Code still runs things on your local machine. So if you have some pretty expensive transpilation, or dependency trees whose resolution needs musl recompilation, or you're doing something in Rust, you still need a reasonable amount of local firepower. More so if you're running multiple instances of them.
It's fine if you don't have any memory-hogging apps. But as soon as you fire up a couple of demanding Docker containers you'll feel the pain. 8GB isn't much RAM for some applications.
Why do you think people buying the cheapest MacBook available will be running Docker? Do you commonly run Docker containers on the cheapest Windows laptop available? Why not?
> just to show people it's not this handicapped little machine
I used to think this way about Apple, and it's jarring to read with it 10-15 years behind me.
It reads as aggro and oddly tribalistic / sports fan-y.
(what people? who thinks it's slower than an M1? who thinks you can't code on it? what will coding on it prove to these people that the benchmarks they read can't? with all that, why get so invested that you're buying a machine you don't want to use day to day? what does "handicapped" mean in this context?)
Only sharing b/c I never understood why people would roll their eyes at me, and apparently I finally reached my own graybeard moment, and I am now rolling my eyes at both of my selves :)
> I've been tempted to buy one and do "real dev work" on it just to show people it's not this handicapped little machine.
But... you can do the same exercise with a $350 windows thing. Everyone knows you can do "real dev work" on it, because "real dev work" isn't a performance case anymore, hasn't been for like a decade now, and anyone who says otherwise is just a snob wanting an excuse to expense a $4k designer fashion accessory.
IMHO the important questions to answer are business side: will this displace sales of $350 windows machines or not, and (critically) will it displace sales of $1.3k Airs?
HN always wants to talk about the technical stuff, but the technical stuff here isn't really interesting. The MacBook Neo is indeed the best laptop you can get for $600-700.
But that's a weird price point in the market right now, as it underperforms the $1k "business laptops" (to avoid cannibalizing Air sales) and sits well above the "value laptop" price range.
No, you can't do real work on a $350 Windows machine. No way such a setup is suitable for anything beyond browsing a tab or two and connecting to servers over SSH.
And the sheer shittiness of the experience will distract you from attempting real work: the horrible touchpad, the bad screen, the forced Windows updates when you're trying to start the machine to do something urgent, ads in Windows, the lack of proper programmability in Windows (unless you use WSL)... Add the fact that the toy is likely to break in a year or two. These issues exist on far more expensive Windows machines, let alone a $350 one.
Leaving Windows machines and OS behind for more than a decade has been a continuing breath of fresh air. I have several issues with the Apple devices and macOS (as I have with Linux too), but on the whole they are far better than Windows. The only good thing about Windows that I miss on Macs is the file explorer and window management, not sure why Apple stubbornly refuses to copy those.
A lot of $350-ish Windows machines also don't have SSDs but instead eMMC storage, which is dog slow and will make modern SSD-mandatory Windows feel even more awful to use.
If Windows/Linux/x86 is non-negotiable and that's your budget, I would never in a million years recommend anything brand new. This is when you go pick up a $350 used midrange ThinkPad on eBay. It won't outperform a Neo in terms of CPU and battery life, but I guarantee it'll be a better experience than the garbage routinely sold at this price point.
Of course you can. You can do real work on an $80 Amazon Fire. Yes, some things will be potentially impossible or frustrating but that's also true of the MacBook Neo, just a bit higher of a bar. A lot of this also depends on your definition of "real work".
$350 USD can get you a decent laptop with an SSD, 16GB RAM, and something like an Intel N100 or N95. And those are pretty comparable to a decent Intel Skylake CPU, which is still pretty usable.
Yes, the Neo has a faster CPU, but it also has less RAM, less storage, fewer ports, and costs more. Besides ray-traced games, what can the Neo do that the others can't? They'll take longer, but they'll get there.
And if you're willing to go used? That $350 goes a lot further.
> Yes, the Neo has a faster CPU but it also has less RAM and less storage and costs more and has less ports.
8GB on Apple Silicon is far better than 16GB on Wintel, and I don't even trust the quality of 16GB of RAM in a bottom-of-the-barrel Windows machine.
Would you prefer a machine that is still good 7 years from now with less ports, or one with more ports that you have to replace in 2 years? Yes it is more expensive now, but over 7 years it is an absolute bargain.
16 GB physical RAM is just better. Apple isn't magic. Gimme a break. Both devices have SSDs for fast swapping and have RAM compression. You can't spin up a VM that has 8GB RAM on the Neo, you can't load a large spreadsheet or do a decently sized digital painting. I could maybe buy a claim that 8GB is better on Mac than 8GB on Windows.
Why would you have to replace it in 2 years? How do we know Apple will even be offering updates to Neo in 7 years? Will 8GB still be usable in 7 years really? 8GB is barely on the fence already.
I wouldn't be surprised if Apple drops the Neo from software support in less than 7 years.
> No, you can't do real work on a $350 windows machine.
Sigh. I mean, even absent the obvious answers[1], that's just wrong anyway. You're being a snob. Want to run WSL? Run WSL. Want to run vscode natively? Ditto. Put it on a cheap TV and run your graphical layout and 3D modelling work. I mean, obviously it does all that stuff. OBVIOUSLY, because that stuff is all cheap and easy.
All the complaining you're doing is about preference, not capability. You're being a snob. Which is hardly weird, we're all snobs about something.
But snobs aren't going to buy the Neo either. Again, the business question here is whether the $350 junk users can be convinced to be snobs for $600.
[1] "Put Linux on it", "All of your stuff is in the cloud anyway", "It's still a thousand times faster than the machine on which I did my best work", etc...
You mean that machine from 30 years ago that was running 30-year-old software that has nothing in common with today's development? And how well does Linux run on 4GB?
That's a 16GB Windows box that will happily run multiple VMs for whatever your deployment environment is, something the Neo is actually going to struggle with. The Jasper Lake CPU is indeed awfully slow, but again, for routine "dev" tasks that's just not a limit.
You would obviously refuse out of taste, but if you were actually forced to use this machine to do your job... you absolutely could.
I run a full AI operations stack on an M4 Mac Mini: ClawdBot (Claude), OBS streaming a 24/7 WebGL simulation, Chrome for browser automation, 16 cron jobs, the whole thing. A $599 machine.
Reality check: it works remarkably well for AI agent orchestration. The unified memory architecture means the agent, browser, and streaming can coexist without the memory wall you'd hit on x86. But running OBS alongside everything else does make it laggy; I've got an M5 MacBook Air (32GB) incoming and I'm planning to swap the Mini for a 64GB model to give more headroom.
For anyone considering Apple Silicon as an AI dev machine: the sweet spot is 64GB unified memory minimum if you want to run an agent + browser automation + anything else simultaneously. 32GB works but you feel the pressure. The M-series efficiency means you can leave it running 24/7 without worrying about your power bill, which matters when your AI agent literally never sleeps.
Kinda comparing apples to oranges. AWS was using EBS and not local instance storage. So you're easily looking at another order of magnitude latency when transmitting data over the network versus a local PCIe bus. That's gonna be a huge factor in what I assume is a heavy random-seek load.
I wrote a longer comment already (https://news.ycombinator.com/item?id=47352526) but looking at the hot run performance and making big hand wavy guesses, the performance difference might not be as big as you'd expect.
But AWS beat the laptop? And there's no cost to performance analysis? Yes AWS is overpriced but how do you make that conclusion from this specific article? Because network disks were slower than SSDs? AWS also has SSD instances with local storage.
I haven't tried the newer I7i and I8g instance types (the newest instances with local storage) for myself, but AWS claims "I7i instances offer up to 45TB of NVMe storage with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances."
I benchmarked I4i at ~2GB/s read, so let's say I7i gets 3GB/s. The Verge benchmarked the 256GB Neo at 1.7GB/s read, and I'd expect the 512GB SSD to be faster than that.
Of course, an application specific workload will have its own characteristics, but this has to be a win for a $700 device.
It's hard to find a comparable AWS instance, and any general comparison is meaningless because everybody is looking at different aspects of performance and convenience. The cheapest I* is $125/mo on-demand, $55/mo if you pay for three years up front, $30/mo if you can work with spot instances. i8g.large is 468GB NVMe, 16GB, 2 vCPUs (proper cores on graviton instances, Intel/AMD instance headline numbers include hyperthreading).
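Spelling out the break-even against a $700 Neo with those i8g.large figures (simple arithmetic on the prices just quoted; the $700 laptop price is the figure used elsewhere in the thread):

```python
# Months of i8g.large rental that add up to a $700 MacBook Neo,
# using the per-month prices quoted above.
neo_price = 700
tiers = {"on-demand": 125, "3-year upfront": 55, "spot": 30}

months = {name: neo_price / per_month for name, per_month in tiers.items()}
for name, m in months.items():
    print(f"{name}: {m:.1f} months")
```

So even on spot pricing, the instance catches up to the laptop's sticker price in under two years.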
Yeah, this is really about how ludicrously overpriced big cloud is. I've got a first-gen M1 Max and it destroys all but the largest cloud instances (that cost its entire current market value per month!), at least in compute. It's a laptop! A decent bare-metal server in a rack will destroy any laptop.
It's staggering. Jaw-dropping. Bandwidth is even worse, like a 10000x markup.
Yet cloud is how we do things. There's a generation, or maybe two now, of developers who know nothing but cloud SaaS.
I agree and disagree, the benefit with cloud is you "don't need to manage it", it scales automatically, redundancy, and automatic backups etc. I do think you are right; in the future there will be more infrastructure as code as cost pressures become more obvious.
The tooling (K8s with all its YAML, Terraform, Docker, cloud CLI tools, etc.) is pretty hideously ugly and complicated. I watch people struggle to beat it into shape just like they did with sysadmin automation tools like Puppet and Chef a decade or more ago. We have not removed complexity, only moved it.
The auto scaling thing is a half truth. It can do this if you deploy correctly but the zero downtime promise is only true maybe half the time. It also does this at greatly inflated cost.
Today you can scale with bare metal. Nobody except huge companies physically racks anymore. Companies like Hetzner and DataPacket have APIs to bring boxes up. There's a delay, but you solve that with a bit of over-provisioning. Very, very few companies have workloads so bursty and irregular that they need fully limitless up-and-down scaling. That's one of those niche problems everyone thinks they have.
The uptime promise is false in my experience. Cloud goes down for cluster upgrades and myriad other reasons just as often as self-managed stuff. I've seen serious unplanned outages with cloud too. I don't have hard numbers, but I would definitely wager that if cloud is better for uptime at all, it's not enough of an improvement to justify that gigantic markup.
For what cloud charges I should, as the deploying user, receive five nines without having to think about it ever. It does not deliver that, and it makes me think about it a lot with all the complexity.
The only technical promise it makes good on, and it does do this well, is not losing data. They've clearly put more thought into that than any other aspect of the internal architecture. But there are other ways to not lose data that don't require you to pay a 10x markup on compute and a 10000x markup on transfer.
I think the real selling point of cloud is blame.
When cloud goes down, it's not your fault. You can blame the cloud provider.
IT people like it, and it's usually not their money anyway. Companies like it. They're paying through the nose for the ability to tell the customer that the outage is Amazon's fault.
Cloud took over during the ZIRP era anyway, when money was infinite. If you have growth, raise more. COGS doesn't matter.
With cloud, what you're really paying for is flexibility and scalability. You might not need either for your applications. At some startups, we needed it. We sized clusters wrong, needed to scale up in hours. This is something we wouldn't ever be able to do with our own hardware without tons of lead time.
If your application won't ever require more resources than a single server or two, then you are better off looking at other alternatives.
When I teach, I use "big data" for data that won't fit in a single machine. "Small data" fits on a single machine in memory and medium data on disk.
Having said that, DuckDB is awesome. I recently ported a 20-year-old Python app to modern Python. I made the backend swappable, Polars or DuckDB. Got a 40-80x speed improvement. Took 2 days.
A bit of a moving target there, especially with the definition of medium data on disk considering the rise of high speed NVMe vs spinning metal. Makes me wonder if the 00s 'Big Data' era and the resulting infra is largely just outdated now...
The funny thing is that these days you can fit 64 TB of DDR5 in a single physical system (IBM Power Server), so almost all non-data-lake-class data is "small data".
I'm curious - what were you doing that Polars was leaving a 40-80x speedup on the table? I've been happy with its speed when held correctly, but it's certainly easy to hold it incorrectly and kill your perf if you're not careful.
Polars is fastest when you avoid eager eval mid-pipeline. If you see a 40x gap it's often from calling .collect() inside a loop or applying Python UDFs row-wise.
Might be tangential, but in my recent experience Polars kept crashing the Python process with OOM errors whenever I tried to stream data from and into large Parquet files with some basic grouping and aggregation.
Claude suggested just using DuckDB instead, and indeed, it made short work of it.
As a broke ecologist: this little computer can do everything I need in R and Word, and it's a phenomenal build for the price. I'm really enjoying it thus far.
Props for identifying the issue immediately, but armed with that knowledge, why not redo the benchmark on a different instance type that has local storage? E.g. why not try a `c8id.2xlarge` or `c8id.4xlarge` (which bracket the `c6a.4xlarge`'s cost)?
Do they make any promises about persistence of local NVMe after something like a full-region power outage yet?
Because if you can't do a durable commit on a single-region cluster (one that would merely become temporarily unavailable, without losing committed data, if something like that happened), it's not quite there; unless you also stream a WAL to storage that they do promise will survive a full blackout of all the zones that store (part of) the data.
I don't know how an AWS region would respond to a power outage, but I have tested this with AWS Outposts, and there, if you power down a rack and then power it back on, the bare-metal instances will not be recreated. (I was surprised, as I was expecting the EC2 health check to terminate them, but it does not work like that.)
My understanding is that if you stop/start an instance, your local storage is gone (as the instance might even end up in a different host), but if you just reboot the instance, it should keep the local storage.
Worth noting the c8gd local NVMe is ephemeral so you'd need to pre-stage the data each run, but for a benchmark like this that's actually ideal since you avoid EBS cold-read artifacts entirely.
The laptop is gonna have some local code, maybe a lot, but if I'm doing legitimate "big data", that data is living in the cloud somewhere, and the laptop is just my interface.
Set up the machine yesterday. Everything runs just fine. Will use it mainly for academic writing, and light development work, only conceptual work, PoCs.
The DuckDB team benchmarked with an r7i.16xlarge which uses EBS - that's the expected bottleneck. A fairer comparison would be an i4i or c8gd with local NVMe, where you'd likely see the laptop and cloud instance much closer in practice.
On a MacBook, one can download a data set, reboot, install updates, etc and still have the dataset. Those nice-ish AWS instances will wipe their local storage if they are stopped. Sure, one needs backups, but this is still annoying.
Also, at on-demand prices, three months of continuous usage of a single c8gd.2xlarge will pay for that MacBook Neo. The MacBook Neo has a larger SSD than the AWS instances. To be fair, the MacBook Neo has seriously nerfed external IO bandwidth, so the c8gd.2xlarge will outperform it in networking. That being said, I think that any other Mac in the current lineup will utterly smoke c8gd.2xlarge if you are willing to use Thunderbolt-connected network adapters.
Given how little power modern Macs use, a little closet full of Macs with a decent network switch will easily run on a single 20A circuit and will perform better than quite a few thousands of dollars per month of AWS products. Sadly, you're kind of stuck on macOS (which is not actually a fantastic server OS) and the management tools are poor. Oh, well.
Funny, just yesterday I almost bought one but got cold feet and opted for a low-range MacBook with the M5 chip. The Apple sales rep was not convinced it would be enough when I described using it for vibecoding and deploying, and kind of talked me out of getting the Neo. I normally use a mix of LLMs, then connect to GitHub and do a one-click deploy on CreateOS. Do you think I overreacted? The price of the Neo is SO attractive, a clean half price compared to what I got.
I think you'll be quite a bit happier. Between the quality-of-life stuff like the ambient light sensor, the pure quality stuff like a better screen and speakers, and the extra RAM so it lasts longer, that seems like a good decision.
The Neo is neat, and for someone who mostly does surfing and standard office-work kind of stuff I suspect it's a pretty great little laptop for way less than Apple usually charges.
But it's not going to compete with an M5 anything.
IMHO, 8GB RAM for productivity can quickly become restrictive. I used an M1 with 8GB, and my current MacBook is an M2 with 16GB; to me the difference feels bigger than 2x. It seems not everyone here feels that way, but I'd say there's a reason Apple bumped the base models to 16 and makes that exclusive to non-Neo models.
I suspect the Neo's A-series chip wipes the floor with a Pi.
I'm really surprised just how competitive it was in their benchmark. I was expecting "sure, it doesn't compete, but it works and you can use it", not "it beat an Amazon instance, though not a really powerful one".
Indeed, it would have been interesting but I really wanted to get the blog post out on the launch day of the MacBook Neo and did not have the bandwidth to run additional cloud experiments.
I ran TPC-DS SF300 now on the c6a.4xlarge. It turns out that it's still quite limited by the EBS disk's IO: while 32 GB memory is much more than 8 GB, DuckDB needs to spill to disk a lot and this shows on the runtimes. Running all 99 queries took 37 minutes, so about half of the MacBook's 79 minutes.
> Command being timed: "duckdb tpcds-sf300.db -f bench.sql"
> Percent of CPU this job got: 250%
> Elapsed (wall clock) time (h:mm:ss or m:ss): 37:00.96
Those speeds on the Pro/Max are impressive though, more in line with Gen5 NVMe drives. Those have been available in desktops for some time but AFAIK the controllers are still much too hot and power hungry for laptops, so I think Apple's custom controller is actually the first to practically hit those speeds on mobile.
I'm guessing so many devs started out on 32GB MacBooks that the NEO seems underpowered. But it wasn't too long ago that 8GB, 1,500MB/s IO, and that many cores was an elite machine.
I did a lot of dev work on a glorified Eee PC Chromebook when my laptop was damaged. You don't need a lot of RAM to run a terminal.
I'm hoping the NEO resets the baseline testing environment so developers get back to shipping software that doesn't monopolize resources. "Plays nice with others" should be part of the software developer's creed.
Cue the endless blog posts about running tech on the potato MacBook and being stunned it's functional with massive trade-offs. Groundbreaking stuff.
Trying DuckDB on lower-end MacBooks does show you don't need much muscle for moderate-size analytics. Long term it isn't cost-effective compared to budget laptops, but it's super simple for self-contained pipelines. The thing is, 8GB RAM leaves you stuck once your data actually grows past the marketing demo.
Yes, you're right. I meant a different video, but I can't find it right now.
I've looked it up, and back then MacOS had a bug which exacerbated that issue.
Here is an article
Fantastic teardown. Thank you. Amazing for Apple. I hope this is the trend going forward, but probably not. Still, a gazillion screws? I just replaced the keyboard on my old HP EliteBook with two screws.
It seems like they're starting to learn the cost of being too integrated.
They've slowly been moving towards making it easier to repair individual broken parts. I'm very happy to see that a new keyboard doesn't require replacing the entire top case. That was just crazy.
I agree, I don't think it's going to be something people really do.
I just thought it was neat. It's a phone chip; we've never been able to do stuff like this on an Apple phone chip before. No one was porting this to the iPhone to run there.
In my mind this is purely a curiosity article, and I like that.
I think the form factor is basically the same (maybe slightly thicker) as a Macbook Air. It's basically an Air with lower performance in most dimensions.
You'd be surprised. There are many of us analysts in the third world who are paid pennies and expected to build large-scale exec dashboards from nontrivial data - with no cloud support whatsoever. ETL has to be local from hundreds of GBs of csv dumps.
I think it's partly tongue in cheek, because when "big data" was over hyped, everyone claimed they were working with big data, or tried to sell expensive solutions for working with big data, and some reasonable minds spoke up and pointed out that a standard laptop could process more "big data" than people thought.
> For our first experiment, we used ClickBench, an analytical database benchmark. ClickBench has 43 queries that focus on aggregation and filtering operations. The operations run on a single wide table with 100M rows, which uses about 14 GB when serialized to Parquet and 75 GB when stored in CSV format.
Processing data that cannot be processed on a single machine is fundamentally a different problem than processing data that can be processed on a single machine. It's useful to have a term for that.
As you say, single machines can scale up incredibly far. That just means 16 TB datasets no longer demand big data solutions.
I get your point, but I don't know if big data is the right term anymore.
Many people like to think they have big data, and you kinda have to agree with them if you want their money. At least in consulting.
Also, you could go well beyond a 16TB dataset on a single machine. You assume that the whole uncompressed dataset has to fit in memory, but many workloads don't need that.
How many people in the world have such big datasets to analyse within reasonable time?
I think the definition of big is smaller than that. Mine was "too big to fit on a maxed-out laptop", effectively >8TB. Our photo collection is bigger than that, it's not 'big data'.
Or one could define it as too big to fit on a single SSD/HDD, maybe >30TB. Still within the reach of a hobbyist, but too large to process in memory and needs special tools to work with. It doesn't have to be petabyte scale to need 'big data' tooling.
>Can I expect good performance from the MacBook Neo with Slack, Microsoft Office, and Google Chrome signed into Atlassian and a CRM, all running simultaneously?
No.
>Do I reject a world where all of the above is necessary to realize value from an entry-level MacBook?
I've been tempted to buy one and do "real dev work" on it just to show people it's not this handicapped little machine.
I built multiple iOS apps and went through two start up acquisitions with my M1 MBA as my primary computer, as a developer. And the neo is better than the M1 MBA. I edited my 30-45 min long 4k race videos in FCP on that air just fine.
> I built multiple iOS apps and went through two start up acquisitions with my M1 MBA as my primary computer, as a developer. And the neo is better than the M1 MBA. I edited my 30-45 min long 4k race videos in FCP on that air just fine.
Before I was a professional software developer, I used a scrawny second-hand laptop with a Norwegian keyboard (I'm not Norwegian) because that was what I could afford: https://i.imgur.com/1NRIZrg.jpeg
This was the computer I was developing PHP backends on + jQuery frontends, and where I published a bunch of projects that eventually led to me getting my first software development job, in a startup, and discovering HN pretty much my first day on the job :)
The actual hardware you use seems to me like it matters the least, when it comes to actually being able to do things.
I still manage and develop my PHP/jQuery SaaS product on a 2011 27" iMac running Linux Mint, with an SSD being the only upgrade. It runs better than most new Windows machines. No complaints.
I switch between a ThinkPad T420s and a PineBook Pro for all my hobby work.
The T420s has loose USB ports and the power socket is almost falling off, so I plan to replace it with a five-year-old T14 G2 in the coming months.
I can afford the latest MacBook, but I'd rather not generate more e-waste than there already is; more importantly, I feel closer to my users, and my code is efficient and straight to the point.
My non-hobby laptop is an old cheap Dell from 5-6 years ago.
The best laptop I ever had was a maxed-out Thinkpad P7x, and it came with the most meaningless job ever.
I can only compare that job to the one at a unicorn that gave me the latest and greatest MacBook. Not only was the job meaningless, the whole industry made no sense to me.
I wrote 99% of a large PHP app on a six-year-old laptop with a single 17" LCD. Meanwhile, at my desk, I had a Dell workstation with three monitors, but it was easier to squirrel away in a corner somewhere, undisturbed.
After all, the actual server ran the code, I just needed text editors, terminal windows, and web browsers.
I started my business back in 2006 with an ancient 306 laptop - it was practically free, it ran VIM just fine, and that was all I needed it to do to crank out PHP until the cows came home.
Your hardware matters quite a bit if you're doing lower-level things and the architecture is not the same as the one you're developing for. But apparently HN is all web devs.
I just spent a vacation deciding not to bring a laptop, using my Android phone (a Galaxy S22) with an HDMI adapter and a Bluetooth travel keyboard instead. I plugged it into the TV in our accommodation and had a lot of fun.
Running neovim on termux was fine. Developing elixir was no problem, the test suite took 5s on my phone, and takes 1s on my laptop. Rust and cargo compiling was slow enough that I didn't really enjoy it though.
It meant I could pack up instantly, and I could have an agent running review workflows in my pocket while I was out and about, without noticing a big battery hit.
Interesting vacation activities.
I don't really enjoy compiling rust on my M2 Pro, so I wouldn't necessarily blame the phone
Local LLMs and compiling rust are the only two things that I have seen saturate my M4 Max, hahaha.
And nowadays we have Debian running in a VM on Android [1]
[1] https://www.zdnet.com/article/how-to-use-the-new-linux-termi...
I'm glad enough people got M1 MacBook Airs now that the broader sentiment within the commentariat is changing and people are pushing back on the dismissals.
8GB has ALWAYS been fine on Apple Silicon macOS. RAM usage on a fresh boot is a meaningless statistic (unused RAM is wasted RAM). And they're just plain capable!
It's starting to show its age, but I've been using a 2019 MacBook Pro with the Intel chip and 16GB of memory. Still handles multiple terminal sessions with Claude Code and Codex simultaneously, building in Xcode, running Docker in the background, etc.
(Maybe the fans sometimes sound like a jet engine taking off...)
Finally just put an order in for a new 16" MBP M5 Max with 48GB memory only because it looks like they're going to stop supporting the Intel stuff this year and no more software updates. It'll probably be obsolete in six months with the rate things are going, but I've been averaging seven years between upgrades so it should be good!
Oh my. All I have to say is cherish the first week of your M* experience. :D When I got rid of my intel MBP (it was an i7) for my MBA it was astonishing how fast and smooth it was.
So, the M5 with 48GB of RAM will be amazing.
Hah. yeah. I went from a 64GB i9 to a 32GB M2, and it was night and fucking day
I agree. It was utterly ridiculous how noticeable the improvement was. I was doing z3 solving for the ICFP contest the first couple of weeks after getting the M1 Air, and it was consistently smoking my teammate's maxed-out i7 MBP.
Still using that 2019 MBP (16 inch, 16G memory) as a daily work machine.
It's still handling the load well, though at times the fans get quite loud, especially with all the background processes and VM setups.
I hope to get a new MBP this year, as being on Intel means lots of software won't run on it (the Codex app, for example, won't run on Intel Macs).
Well, Claude Code and Codex should be doing most of their heavy lifting in the cloud?
Sort of. They have no "hands"; LLMs can only respond that they want to execute a tool/command. So they do that a lot: read files, search for things, compile projects, run tests, run other arbitrary commands, fetch stuff from the internet, etc.
Obviously the LLM inference is super heavy, but the actual work / task at hand is being executed on the device.
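As a rough sketch of that split, here is a minimal, hypothetical tool-dispatch loop in Python. The tool names and call format are invented for illustration; this is not Claude Code's actual protocol:

```python
import subprocess

def run_tool(call: dict) -> str:
    """Execute one tool request from the model, locally.

    The model never touches the machine itself; it emits a structured
    request, and a local harness like this runs it and returns the output.
    """
    if call["tool"] == "read_file":
        with open(call["path"]) as f:
            return f.read()
    if call["tool"] == "shell":
        # Builds, tests, greps: the heavy local work all lands here,
        # on the laptop's own CPU and disk.
        proc = subprocess.run(call["cmd"], shell=True,
                              capture_output=True, text=True)
        return proc.stdout + proc.stderr
    return f"unknown tool: {call['tool']}"

# e.g. the model asks to run the test suite:
result = run_tool({"tool": "shell", "cmd": "echo all 12 tests passed"})
```

The inference happens remotely, but every `shell` call in a loop like this burns local CPU and disk, which is why a slow machine still feels slow under an agent.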
The AI part, yes. But they also use quite inefficient rendering in the CLI.
Yeah, I run Claude Code on a 2013 MacBook Air that refuses to die. I don't think it's very compute heavy.
I use a 2015 MacBook Pro all the time--like right now. It does have 16GB of memory. It's what sits on my dining room table where I do most of my writing/browsing and which I take for travel. I do have an Apple Silicon MacBook Pro in my office but my downstairs "office" is a lot lighter and airier.
I use a 2015 MacBook Pro all the time--like right now.
I have a 2010 MacBook Air that I still use when traveling.
The battery is completely shot, but it works fine when plugged in. And if I'm on the road, I don't use my computer until I get to the hotel anyway. And even then, it's just fine for e-mail, browsing, and even Photoshop.
I have one of these (it's my only Mac), but it only has 2GB of RAM, so it's kinda rough. I tried Mint on it, but IIRC it might not have the GPU drivers? I just bought it a new SSD which helped a bit.
I think this one had a battery replacement because it was bulging. But it's definitely in the class of devices that, if it gets swiped or lost, is basically in the <ehh> category as opposed to my newer one.
Am probably giving a newish iPad and magnetic keyboard a spin on my next trip, mostly to see how it goes.
Try running Teams on it though!
Former employee of mine had the 2019 MBP as well. After a few years he had the same problem with the fans -- if you haven't already, pop it open and clean the fans and vents. You'll probably need a little brush along with compressed air. Lots of stuff comes up on Google. Great machine btw. Good luck!
Thanks for this tip! The fans of mine have been spinning up regularly, especially noticeable when I upgraded to Tahoe a few days ago.
I was using a M1 Mac Mini and only 8GB of RAM on it to build iOS apps for maybe a year. It's absolutely doable, though it very noticeably gets a little less snappy when building projects. When building in Xcode and then switching to Firefox to browse for instance, I could tell it took slightly longer to switch tabs and YouTube playback would occasionally stutter if too much was happening.
I was also using an Intel MacBook Pro with 16GB at the time. Doing the same thing there was much smoother and snappier. On the whole, it actually made me want to just use the laptop instead, since it "felt" nicer. (This isn't measuring build times or anything like that, just snappiness of the OS.)
People usually forget 8GB isn't 8GB. Memory compression means you can store ~2x (lz4) to 3x (zstd) as much data in memory as ordinarily. And in the worst case, reading swap back from disk is much faster with NVMe SSDs (writes matter less, since the OS can do them ahead of time).
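You can get an intuition for those ratios by compressing a semi-repetitive buffer with a general-purpose codec. macOS uses its own page compressor rather than zlib, so the number below is only illustrative:

```python
import zlib

# A fake 16-ish KiB "page" mixing zeroed, repetitive, and structured bytes,
# standing in for typical in-memory data. Real memory pages vary a lot;
# zlib here is only a stand-in for the OS's actual page compressor.
page = (b"\x00" * 1024 + b"configuration=true;" * 100 + bytes(range(256)) * 4) * 4
compressed = zlib.compress(page)
ratio = len(page) / len(compressed)   # typically well above 2x on data like this
```

Real compressed-memory subsystems trade a little CPU per page for effectively more RAM, which is exactly the trade a fast, efficient SoC can afford to make.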
The worst corner they cut is no keyboard backlighting. That saves them what, $1 of BoM per MacBook Neo? Especially because now they have to stand up an entirely new keyboard production line instead of just piggybacking off the Air keyboard line.
I just retired my M1 Air to being a server this month. They're very capable laptops. If the Neo is even comparable in spec, it's excellent for the price.
My M1 Air with 1TB SSD and 16GB of RAM is a little champion. I use it during travel to play indie games like Hades II or Slay the Spire, and it works really well, better than my Steam Deck which broke. The only issue it really has is when I try to plug it into my docking station: it struggles mightily with two 2K screens and a 4K screen, so I just use my desktop in that case.
I am jealous of my wife's 13" M5 iPad Pro though; that OLED screen is gorgeous, a wonder of modern engineering.
> [...] and it works really well, better than my Steam Deck which broke.
Well, the MacBook Air was also a lot more expensive than the Steam Deck?
The M1 also consumes more power.
I just bought a second hand M1 64GB as my main work laptop, haha. They definitely are capable laptops
Yeah! My M1 air is now my iOS build server since GH actions bill macOS mins at 10x the price.
How do you use the M1 Air as an iOS build server? Is 8GB sufficient for only doing iOS builds? Do you connect to it remotely?
Could you please describe your dev process?
It works out pretty okay for me, I do it since GH runners are very expensive and I have my own hardware so why not. https://docs.github.com/en/actions/concepts/runners/self-hos...
I set up a self-hosted runner and then use that in my CI workflows. Then I disabled sleep so it can clamshell forever, and now it sits here in my living room silently workin' https://imgur.com/a/EaBICdo
why does GH actions bill macOS minis 10X?
Ah, sorry, minutes. They bill the most for macOS, probably because of what a pain it is to scale it with Apple's EULA (I'm guessing) https://docs.github.com/en/billing/reference/actions-runner-...
ah ok, minutes makes more sense. thanks.
Mins here being short for minutes, not minis.
And, presumably for a combination of the Mac build (and hardware) being of niche interest and sitting outside the standard Linux workflows so it's annoying to administer. And serving a money-making audience (iOS app devs) who have a revenue stream and see the extra CI cost as worth it.
What is a macOS mini…
Not "mini", "mins" -> minutes.
It will do real work fine. But Slack and a browser will bring it to its knees.
I have an older 8GB MacBook Air. This is false. I routinely have Slack, Chrome, iTerm, Visual Studio Code, and more open on it. It's fine.
Those apps don't need every single byte of memory you see in Activity Monitor to be active in RAM all the time. The OS swaps unused parts out to the very fast SSD. If you push it so far that active pages are constantly being swapped out as apps compete, then you start to notice, but the threshold for that is a lot higher than HN comments seem to think.
It really isn't. It is a capable machine, but modern software has made it a lemon. And that is the only reason Apple sells it: so that whoever buys it needs to buy another one prematurely, generating another sale.
Everything from Apple to modern software is rotten to its core.
> It really isn't.
…in reply to someone who just said their experience is fine, and included details. If you just want to rant about Apple, have at it, but you're going to have to do better than "nuh-uh" if you want to be convincing.
Well I could say that it isn't enough for vscode alone. And I'd be right. It all depends on how and what you use vscode for.
8GB really shouldn't be an option in 2026, it is just shortsighted and an insanely uneven build.
I could rant about Dell too. Or most other manufacturers (surprise, greed isn't apple exclusive). But Apple at least tries to keep the appearance of a higher profile.
> Well I could say that it isn't enough for vscode alone. And I'd be right. It all depends on how and what you use vscode for.
Fair enough; experience says 8GB will run VS Code, though it would very much depend on the use case, I agree. OTOH, I would argue that anyone working VS Code that hard probably isn't buying 8GB machines, but OP did say they're running it, so it's up for discussion.
I'm sick to death of this. It's so divorced from reality in 2026 that I see it as a lowest-common-denominator populist political catchphrase more than any legitimate contribution to any conversation. My min-spec MacBook Pro from 6 years ago doesn't flinch at this, and it barely flinches at a whole lot more.
Can we please just move on? Maybe get your hardware checked if you're legitimately still having these issues.
Trust me, so am I. And I am dead serious.
Only if you insist on running the standalone slack app for some reason. Why run one instance of Chrome when you can pay for two?
I've been finding it hard to wean myself off the standalone app but another major reason to do so is opening threads in separate tabs. I find as soon as I'm involved in two or more conversations on there it's super easy to start losing track of things.
You don't have an 8GB Apple Silicon MacBook, do you? So why did you post?
Maybe if you have 100 browser tabs or something silly like that?
A couple YouTube tabs are enough if you leave them running for long enough. Just one YT browser process will easily take up 1-4GB sooner or later.
Or it wonât because Chrome and MacOS will know how much RAM is available and manage it effectively.
I am talking from experience with an M1 & 8GB RAM. I had to restart either the browser or the YouTube browser processes at least once every couple days to stop the whole system from lagging.
A couple Facebook Marketplace tabs that have videos of the item for sale would absolutely crush my 2017 MacBook Pro.
My M1 Air would slow down a little, but was still usable doing the same thing. And they both had 8GB of memory.
While I agree with your statement, I don't think judging one's way of working and using their computer was necessary.
I could have two browser windows open in the late 1990s. I have about a thousand times as much RAM now. So even with 10x more bloat in the pages, I should be able to open 200 tabs just fine.
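That back-of-envelope works out as follows (all the factors are the comment's own rough guesses, not measurements):

```python
# Late-1990s machine vs. today, using the comment's rough numbers.
ram_growth = 1000     # ~64 MB then -> ~64 GB now, roughly 1000x
page_bloat = 10       # assume each page is 10x heavier now
windows_then = 2      # what comfortably fit back then

tabs_now = windows_then * ram_growth // page_bloat
print(tabs_now)  # 200
```

Even granting an order of magnitude of page bloat, the RAM growth dominates, which is the comment's point.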
I wrote a fix for node that got upstreamed a few years ago on a Lenovo Thinkpad 3 Chromebook. I'm actually commenting from it now. It's not a workhorse by any means, but for $99, it's not bad. A 1.1GHz Celeron processor with 4GB of memory is able to compile projects like node, python, Erlang, etc. without much hassle. It just takes a lunch break :)
Any modern Mac is more than capable. I had the baseline M1 MacBook Air that I did work on as well, just to see how that fared. Much better than this machine: 10x the price, but more than 10x the performance. This one is great as a "I don't mind if I break it or lose it" device.
I was doing Android development and Verilog synthesis on a mobile Nehalem i5 in 2020. That machine is still totally adequate for anything a "normal person" does with their computer, provided they have good tab hygiene. The reality is that (unless you play video games and/or you want local LLM inference) the demands people place on their computers haven't changed significantly in at least 10 years.
cool humble brag
I know it's not really related, but how did you manage to build two startups worth getting acquired in such a short period of time?
Oh that made it seem like I was the driving factor. Maybe for the first one (Percy.io) I can claim a large part of that success (owning the SDKs and support end to end).
The other I just owned the front end infra and was on the growth team. The rest of the folks were the stars on that one.
Edit: I brought that up because I guess I don't know any more "real work" than that, ha. What is 'real work'?
Doubt this is getting answered :)
Just did :p
It would have been a better fit for me than the M4 Air; I literally use it only for typing and browsing, plus a couple of Mac-only tools. Brilliant machine, but complete overkill for me. It's almost tempting to switch just to get rid of the display notch.
I'm still doing iOS dev on my 2020 M1 MBP, and it's fine! I expect that if I change out its battery and apply new thermal paste it would run for another 6 years.
Can you say a little more about what you mean by "better"? How much faster is editing?
Better in terms of raw specs. The original M1 Air also came with 8GB of RAM, and the A18 Pro in the Neo is faster than the version of the M1 that shipped in the base model Air.
I'd say get one with a fan; building/indexing my small React Native app in Xcode takes several minutes on a 2020 M1 MacBook Air.
But damn, I like that design.
The argument is misrepresented - I think it's about frustration and convenience, not achievability.
I developed some work that keeps tens of thousands of people alive every day on a $100 Acer netbook almost 15 years ago. The tools are always there, I don't think anyone thinks the work is actually impossible to do on a limited machine.
most dev workflows from pre-2021 can probably run just fine on a NEO - i think once you get into conductor / 8 terminals with claude code territory, that's where things start to slow down
i just got an m5 max with 128gb of ram specifically to run local llms
Does Claude Code take up that many local resources? I thought the heavy lifting was in the cloud?
Claude Code still runs things on your local machine. So if you have some pretty expensive transpilation, or dependency-tree resolution that needs a musl recompilation, or you're building something in Rust, you still need a reasonable amount of local firepower. More so if you're running multiple instances of them.
Heh, I also upgraded to run local LLMs. As a tiny aside, codex does not burn resources like CC does.
It's fine if you don't have any memory-hogging apps. But as soon as you fire up a couple of demanding Docker containers, you'll feel the pain. 8GB isn't much RAM for some applications.
Why do you think people buying the cheapest MacBook available will be running Docker? Do you commonly run Docker containers on the cheapest Windows laptop available? Why not?
> just to show people it's not this handicapped little machine
I used to think this way about Apple, and it's jarring to read with 10-15 years behind me.
It reads as aggro and oddly tribalistic / sports-fan-y.
(What people? Who thinks it's slower than an M1? Who thinks you can't code on it? What will your coding on it prove to these people that the benchmarks they read can't? With all that, why get so invested that you're buying a machine you don't want to use day to day? What does "handicapped" mean in this context?)
Only sharing b/c I never understood why people would roll their eyes at me, and apparently I finally reached my own graybeard moment, and I am now rolling my eyes at both of my selves :)
> I've been tempted to buy one and do "real dev work" on it just to show people it's not this handicapped little machine.
But... you can do the same exercise with a $350 windows thing. Everyone knows you can do "real dev work" on it, because "real dev work" isn't a performance case anymore, hasn't been for like a decade now, and anyone who says otherwise is just a snob wanting an excuse to expense a $4k designer fashion accessory.
IMHO the important questions to answer are on the business side: will this displace sales of $350 Windows machines or not, and (critically) will it displace sales of $1.3k Airs?
HN always wants to talk about the technical stuff, but the technical stuff here isn't really interesting. The MacBook Neo is indeed the best laptop you can get for $600-700.
But that's a weird price point in the market right now, as it underperforms the $1k "business laptops" (to avoid cannibalizing Air sales) and sits well above the "value laptop" price range.
No, you can't do real work on a $350 windows machine. No way such a setup is suitable for anything beyond browsing a tab or two and connecting to servers using SSH.
And the whole shittiness of the experience will distract you from attempting real work: the horrible touchpad, the bad screen, the forced Windows updates when you're trying to start the machine to do something urgent, ads in Windows, the lack of proper programmability in Windows (unless you use WSL)... Add the fact that the toy is likely to break in a year or two. These issues exist on far more expensive Windows machines, let alone a $350 one.
Leaving Windows machines and the OS behind more than a decade ago has been a continuing breath of fresh air. I have several issues with Apple devices and macOS (as I have with Linux too), but on the whole they are far better than Windows. The only good thing about Windows that I miss on Macs is the file explorer and window management; not sure why Apple stubbornly refuses to copy those.
A lot of $350-ish Windows machines also don't have SSDs but instead eMMC storage, which is dog slow and will make modern SSD-mandatory Windows feel even more awful to use.
If Windows/Linux/x86 is non-negotiable and that's your budget, I would never in a million years recommend anything brand new. This is when you go pick up a $350 used midrange ThinkPad on eBay. It won't outperform a Neo in terms of CPU and battery life, but I guarantee it'll be a better experience than the garbage routinely sold at this price point.
Of course you can. You can do real work on an $80 Amazon Fire. Yes, some things will be potentially impossible or frustrating but that's also true of the MacBook Neo, just a bit higher of a bar. A lot of this also depends on your definition of "real work".
$350 USD can get you a decent laptop with an SSD, 16GB RAM, and something like an Intel N100 or N95. And those are pretty comparable to a decent Intel Skylake CPU, which is still pretty usable.
https://www.amazon.com/NIAKUN-Computer-Processor-Keyboard-Fi...
https://www.amazon.com/AOC-Computer-Processor-Laptops-Window...
Yes, the Neo has a faster CPU but it also has less RAM and less storage and costs more and has less ports. Besides ray traced games what can the Neo do that the others can't? They'll take longer but they'll get there.
And if you're willing to go used? That $350 goes a lot further.
> Yes, the Neo has a faster CPU but it also has less RAM and less storage and costs more and has less ports.
8GB on Apple Silicon is far better than 16GB on Wintel, and I don't even trust the quality of 16GB of RAM on a bottom-of-the-barrel Windows machine.
Would you prefer a machine that is still good 7 years from now with less ports, or one with more ports that you have to replace in 2 years? Yes it is more expensive now, but over 7 years it is an absolute bargain.
16GB of physical RAM is just better. Apple isn't magic. Gimme a break. Both devices have SSDs for fast swapping and have RAM compression. You can't spin up a VM with 8GB of RAM on the Neo; you can't load a large spreadsheet or do a decently sized digital painting. I could maybe buy a claim that 8GB is better on a Mac than 8GB on Windows.
Why would you have to replace it in 2 years? How do we know Apple will even be offering updates for the Neo in 7 years? Will 8GB really still be usable in 7 years? 8GB is barely on the fence already.
I wouldn't be surprised if Apple drops the Neo from software support in less than 7 years.
> No, you can't do real work on a $350 windows machine.
Sigh. I mean, even absent the obvious answers[1], that's just wrong anyway. You're being a snob. Want to run WSL? Run WSL. Want to run VS Code natively? Ditto. Plug it into a cheap TV and do your graphical layout and 3D modelling work. I mean, obviously it does all that stuff. OBVIOUSLY, because that stuff is all cheap and easy.
All the complaining you're doing is about preference, not capability. You're being a snob. Which is hardly weird, we're all snobs about something.
But snobs aren't going to buy the Neo either. Again, the business question here is whether the $350 junk users can be convinced to be snobs for $600.
[1] "Put Linux on it", "All of your stuff is in the cloud anyway", "It's still a thousand times faster than the machine on which I did my best work", etc...
You mean that machine from 30 years ago that was running 30-year-old software that has nothing in common with today's development? And how well does Linux run on 4GB?
So weird to see this kind of flaming more than a decade after it got stale and silly. I mean, yeah, kinda: a 64MB K6-300 was pretty great!
But as to the 4G quip, that's showing some ignorance of where the market is. The value segment is filled with devices like this: https://www.amazon.com/HP-Stream-BrightView-N4120-Graphics/d...
That's a 16G windows box which will happily run multiple VMs for whatever your deployment environment is, something the Neo is actually going to struggle with. The Jasper Lake CPU is indeed awfully slow, but again for routine "dev" tasks that's just not a limit.
You would obviously refuse out of taste, but if you were actually forced to use this machine to do your job... you absolutely could.
But this has no real SSD. Back to external SSD like on Apple devices?
I run a full AI operations stack on an M4 Mac Mini: ClawdBot (Claude), OBS streaming a 24/7 WebGL simulation, Chrome for browser automation, 16 cron jobs, the whole thing. $599 machine.
Reality check: it works remarkably well for AI agent orchestration. The unified memory architecture means the agent, browser, and streaming can coexist without the memory wall you'd hit on x86. But running OBS alongside everything else does make it laggy, so I've got an M5 MacBook Air (32GB) incoming and I'm planning to swap the Mini for a 64GB model to give more headroom.
For anyone considering Apple Silicon as an AI dev machine: the sweet spot is 64GB unified memory minimum if you want to run an agent + browser automation + anything else simultaneously. 32GB works but you feel the pressure. The M-series efficiency means you can leave it running 24/7 without worrying about your power bill, which matters when your AI agent literally never sleeps.
What does your AI agent do on it?
This is as much an indictment of AWS compute as it is anything else.
Kinda comparing apples to oranges. AWS was using EBS and not local instance storage. So you're easily looking at another order of magnitude of latency when transmitting data over the network versus a local PCIe bus. That's gonna be a huge factor in what I assume is a heavy random-seek load.
I wrote a longer comment already (https://news.ycombinator.com/item?id=47352526) but looking at the hot run performance and making big hand wavy guesses, the performance difference might not be as big as you'd expect.
The article is literally saying the opposite. Quote:
> Here's the thing: if you are running Big Data workloads on your laptop every day, you probably shouldn't get the MacBook Neo.
> All that said, if you run DuckDB in the cloud and primarily use your laptop as a client, this is a great device
But AWS beat the laptop? And there's no cost to performance analysis? Yes AWS is overpriced but how do you make that conclusion from this specific article? Because network disks were slower than SSDs? AWS also has SSD instances with local storage.
I haven't tried the newer I7i and I8g instance types (the newest instances with local storage) for myself, but AWS claims "I7i instances offer up to 45TB of NVMe storage with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances."
I benchmarked I4i at ~2GB/s read, so let's say I7i gets 3GB/s. The Verge benchmarked the 256GB Neo at 1.7GB/s read, and I'd expect the 512GB SSD to be faster than that.
Of course, an application specific workload will have its own characteristics, but this has to be a win for a $700 device.
It's hard to find a comparable AWS instance, and any general comparison is meaningless because everybody is looking at different aspects of performance and convenience. The cheapest I* is $125/mo on-demand, $55/mo if you pay for three years up front, $30/mo if you can work with spot instances. i8g.large is 468GB NVMe, 16GB, 2 vCPUs (proper cores on graviton instances, Intel/AMD instance headline numbers include hyperthreading).
Yeah, this is really about how ludicrously overpriced big cloud is. I've got a first-gen M1 Max and it destroys all but the largest cloud instances (which cost its entire current market value per month!), at least in compute. It's a laptop! A decent bare metal server in a rack will destroy any laptop.
It's staggering. Jaw dropping. Bandwidth is even worse, like a 10000X markup.
Yet cloud is how we do things. There's a generation or maybe two now of developers who know nothing but cloud SaaS.
I watched everyone fall for it in real time.
I agree and disagree, the benefit with cloud is you "don't need to manage it", it scales automatically, redundancy, and automatic backups etc. I do think you are right; in the future there will be more infrastructure as code as cost pressures become more obvious.
Those benefits are at least partly lies though.
The tooling (K8s with all its YAML, Terraform, Docker, cloud CLI tools, etc.) is pretty hideously ugly and complicated. I watch people struggle to beat it into shape just like they did with sysadmin automation tools like Puppet and Chef a decade or more ago. We have not removed complexity, only moved it.
The auto-scaling thing is a half-truth. It can do this if you deploy correctly, but the zero-downtime promise is only true maybe half the time. It also does this at greatly inflated cost.
Today you can scale with bare metal. Nobody except huge companies physically racks anymore. Companies like Hetzner and DataPacket have APIs to bring boxes up. There's a delay, but you solve that with a bit of over-provisioning. Very few companies have workloads so bursty and irregular that they need full limitless up-and-down scaling. That's one of those niche problems everyone thinks they have.
The uptime promise is false in my experience. Cloud goes down for cluster upgrades and a myriad of other reasons just as often as self-managed stuff. I've seen serious unplanned outages with cloud too. I don't have hard numbers, but I would definitely wager that if cloud is better for uptime at all, it's not enough of an improvement to justify that gigantic markup.
For what cloud charges I should, as the deploying user, receive five nines without having to think about it ever. It does not deliver that, and it makes me think about it a lot with all the complexity.
The only technical promise it makes good on, and it does do this well, is not losing data. Theyâve clearly put more thought into that than any other aspect of the internal architecture. But thereâs other ways to not lose data that donât require you to pay a 10X markup on compute and a 10000X markup on transfer.
I think the real selling point of cloud is blame.
When cloud goes down, it's not your fault. You can blame the cloud provider.
IT people like it, and it's usually not their money anyway. Companies like it. They're paying through the nose for the ability to tell the customer that the outage is Amazon's fault.
Cloud took over during the ZIRP era anyway, when money was infinite. If you have growth, raise more. COGS doesn't matter.
Maybe cloud is ZIRPslop.
Not all IaC is Kubernetes.
With cloud, what you're really paying for is flexibility and scalability. You might not need either for your applications. At some startups, we needed it. We sized clusters wrong, needed to scale up in hours. This is something we wouldn't ever be able to do with our own hardware without tons of lead time.
If your application won't ever require more resources than a single server or two, then you are better off looking at other alternatives.
Honestly I think the best path is hybrid with the cloud as DR and sudden load scaling.
Metal with data streamed to cloud and cloud as hot backup is something some people already do.
If the metal dies in a catastrophic way (multiple nodes at once and loss of quorum, catastrophic DC outage, etc.) you spin it up in AWS.
When I teach, I use "big data" for data that won't fit in a single machine. "Small data" fits on a single machine in memory and medium data on disk.
Having said that, DuckDB is awesome. I recently ported a 20-year-old Python app to modern Python. I made the backend swappable, Polars or DuckDB. Got a 40-80x speed improvement. Took 2 days.
A bit of a moving target there, especially with the definition of medium data on disk considering the rise of high speed NVMe vs spinning metal. Makes me wonder if the 00s 'Big Data' era and the resulting infra is largely just outdated now...
The funny thing is that these days you can fit 64 TB of DDR5 in a single physical system (IBM Power server), so almost all non-data-lake-class data is "Small data".
And a single machine can hold petabytes of disk for medium scale. There aren't many datasets exceeding that outside fundamental physics.
I'm curious - what were you doing that Polars was leaving a 40-80x speedup on the table? I've been happy with its speed when held correctly, but it's certainly easy to hold it incorrectly and kill your perf if you're not careful.
20 year old BI app. Columnar DBs weren't really a thing. (MonetDB was brand new but not super stable. I committed the SQLAlchemy interface to it.)
Polars is fastest when you avoid eager eval mid-pipeline. If you see a 40x gap it's often from calling .collect() inside a loop or applying Python UDFs row-wise.
App is now lazy!
Might be tangential but in my recent experience polars kept crashing the python server with OOM errors whenever I tried to stream data from and into large parquet files with some basic grouping and aggregation.
Claude suggested to just use DuckDB instead and indeed, it made short work of it.
Hah, I wish people who are saying 'can you even do anything with 8GB in 2026' would read posts like this.
As a broke ecologist: this little computer can do everything I need in R and Word, and is a phenomenal build for the price. I'm really enjoying it thus far.
I take it you're researching clams? Or you happen to like clams a lot?
Where I live, our government-funded clam research programs are mostly shutting down. Very sad.
How did you get one already? I thought they were just up for pre-order
Shipping started yesterday, meaning preorders would already have arrived then
Mine started shipping on 8 March to arrive on the 11 March release date.
yea, preordered.
This is awesome.
I wish more companies would do showcases like this of what kind of load you can expect from commodity-ish hardware.
I adore DuckDB.
Did a PoC on an AWS Lambda for data that was gzipped in an S3 bucket.
It was able to replace about 400 C# LoC with about 10 lines.
Amazing little bit of kit.
This might be a buy for me once it is fully supported by Linux. Hopefully, the muscle memory of Ctrl, Super and Alt won't get in my way.
If SPTM is active on the chip, we are not going to be getting Linux at all.
> The cloud instances have network-attached disks
Props for identifying the issue immediately, but armed with that knowledge, why not redo the benchmark on a different instance type that has local storage? E.g. why not try a `c8id.2xlarge` or `c8id.4xlarge` (which bracket the `c6a.4xlarge`'s cost)?
I would have benchmarked with an instance that has local nvme, like c8gd.4xlarge.
Do they make any promises yet about persistence of local NVMe after something like a full-region power outage? If you can't do durable commits on a single-region cluster (one that would be merely temporarily unavailable, without losing committed data, if something like that happened), it's not quite there, unless you also stream a WAL to storage that they do promise will survive a full blackout of all the zones storing (part of) the data.
Yes. They promise to wipe your data. That SLA has all the nines you can ask for as long as you measure it in the right direction :)
You already lose your data after an instance restart, so I think a full-region outage is already out of the question.
Idk how an AWS region would respond to a power outage, but I have tested this in AWS Outposts: if you power down a rack, then power it back on, the bare-metal instances will not be recreated. (I was surprised, as I was expecting the EC2 health check to terminate them, but it does not work like that.) My understanding is that if you stop/start an instance, your local storage is gone (as the instance might even end up on a different host), but if you just reboot the instance, it should keep the local storage.
Worth noting the c8gd local NVMe is ephemeral so you'd need to pre-stage the data each run, but for a benchmark like this that's actually ideal since you avoid EBS cold-read artifacts entirely.
I think it's relevant to first read [1] to see why they're doing this. It's basically done as a meme.
[1] https://motherduck.com/blog/big-data-is-dead/
> An alternate definition of Big Data is "when the cost of keeping data around is less than the cost of figuring out what to throw away."
That couldn't be more accurate
That's not Big Data. If you "need to process Big Data on the move" - what you need is a network.
aye.
The laptop is gonna have some local code, maybe a lot, but if I'm doing legitimate "big data", that data is living in the cloud somewhere, and the laptop is just my interface.
Set up the machine yesterday. Everything runs just fine. Will use it mainly for academic writing, and light development work, only conceptual work, PoCs.
The DuckDB team benchmarked with an r7i.16xlarge which uses EBS - that's the expected bottleneck. A fairer comparison would be an i4i or c8gd with local NVMe, where you'd likely see the laptop and cloud instance much closer in practice.
On a MacBook, one can download a data set, reboot, install updates, etc and still have the dataset. Those nice-ish AWS instances will wipe their local storage if they are stopped. Sure, one needs backups, but this is still annoying.
Also, at on-demand prices, three months of continuous usage of a single c8gd.2xlarge will pay for that MacBook Neo. The MacBook Neo has a larger SSD than the AWS instances. To be fair, the MacBook Neo has seriously nerfed external IO bandwidth, so the c8gd.2xlarge will outperform it in networking. That being said, I think that any other Mac in the current lineup will utterly smoke c8gd.2xlarge if you are willing to use Thunderbolt-connected network adapters.
Given how little power modern Macs use, a little closet full of Macs with a decent network switch will easily run on a single 20A circuit and will perform better than quite a few thousands of dollars per month of AWS products. Sadly, you're kind of stuck on macOS (which is not actually a fantastic server OS) and the management tools are poor. Oh, well.
Funny, just yesterday I almost bought one but got cold feet and opted for a low-range MacBook with the M5 chip. The Apple sales rep was not convinced it would be enough when I described using it for vibecoding and deploying, so he kind of talked me out of getting the Neo. I normally use a mix of LLMs, then connect to GitHub and do a one-click deploy on CreateOS. Do you think I overreacted? The price of the Neo is SO attractive, a clean half price compared to what I got.
Why do you need an M5 to run Cursor and a browser? Your laptop isn't doing anything in your described workflow.
I think you'll be quite a bit happier. Between the quality-of-life stuff like the ambient light sensor, the pure quality stuff like a better screen and speakers, and the extra RAM so it lasts longer, that seems like a good decision.
The Neo is neat, and for someone who mostly does surfing and standard office-work kind of stuff I suspect it's a pretty great little laptop for way less than Apple usually charges.
But it's not going to compete with an M5 anything.
Imho 8GB RAM for productivity can quickly be restrictive. I used an M1 with 8GB and my current Macbook is M2 with 16GB, and to me the difference feels bigger than 2x. It seems not everyone here feels that way, but I'd say there's a reason Apple bumped the base models to 16 and makes that exclusive to non-Neo models.
If you have doubts and you have the money, why worry about it?
Would it not also work on a Raspberry Pi?
With I/O streaming and efficient transformation I do big data on my consumer PC and good old cheap HDDs just fine.
I suspect the Neo's A-series chip wipes the floor with a Pi.
I'm really surprised just how competitive it was in their benchmark. I was expecting "sure, it doesn't compete, but it works and you can use it", not "it beat an Amazon instance, though not a really powerful one".
IO on a Raspberry Pi is pathetic. As packaged it's 100 times slower; with an M.2 HAT it's still 5x slower.
For the TPC-DS results it would also have been nice to show how the macbook neo compares to the AWS instances.
Or am I missing something?
Indeed, it would have been interesting but I really wanted to get the blog post out on the launch day of the MacBook Neo and did not have the bandwidth to run additional cloud experiments.
I ran TPC-DS SF300 now on the c6a.4xlarge. It turns out that it's still quite limited by the EBS disk's IO: while 32 GB memory is much more than 8 GB, DuckDB needs to spill to disk a lot and this shows on the runtimes. Running all 99 queries took 37 minutes, so about half of the MacBook's 79 minutes.
> Command being timed: "duckdb tpcds-sf300.db -f bench.sql"
> Percent of CPU this job got: 250%
> Elapsed (wall clock) time (h:mm:ss or m:ss): 37:00.96
> Maximum resident set size (kbytes): 25559652
> compared to 3-5 GB/s
Their numbers are a bit outdated. M5 MacBook Pro SSDs are literally 5x this speed. It's wild.
I'm seeing ~6GB/sec: https://www.tomshardware.com/laptops/macbooks/m5-macbook-pro...
That's decently fast but not especially remarkable, most Gen4 NVMe drives can hit 6-7GB/sec.
To be clear, that article is about the base M5, not the M5 Pro or M5 Max.
https://www.apple.com/newsroom/2026/03/apple-introduces-macb...
"The new MacBook Pro delivers up to 2x faster read/write performance compared to the previous generation reaching speeds of up to 14.5GB/s..."
OP did just say M5 (implying the base model)
Those speeds on the Pro/Max are impressive though, more in line with Gen5 NVMe drives. Those have been available in desktops for some time but AFAIK the controllers are still much too hot and power hungry for laptops, so I think Apple's custom controller is actually the first to practically hit those speeds on mobile.
Interesting. Do you have a link?
> TL;DR: How does the latest entry-level MacBook perform on database workloads? We benchmarked it to find out.
That's not a TL;DR, that's just a subheader.
Thank you! I was going to say the same thing. It doesn't give me an overview at all.
You're right! I pushed an updated TL;DR block.
If you can fit it on a thumb drive, it's not Big Data.
That c8g.metal-48xl instance costs $7.63 per hour on demand[1], so for the price of the laptop, you could run queries on it for about ~90 hours.
:shrug: as to whether that makes the laptop or the giant instance the better place to do one's work…
[1] https://aws.amazon.com/ec2/pricing/on-demand/
"Big data" doesn't have a 5GB memory cap.
I'm guessing so many devs started out on 32GB MacBooks that the Neo seems underpowered, but it wasn't too long ago that 8GB, 1500MB/sec IO, and that many cores was an elite machine.
I did a lot of dev work on a glorified Eee PC Chromebook when my laptop was damaged. You don't need a lot of RAM to run a terminal.
I'm hoping the Neo resets the baseline testing environment so developers get back to shipping software that doesn't monopolize resources. "Plays nice with others" should be part of the software developer's creed.
I'm interested in one (not for big data), but only 8 GB of RAM is kinda really sad.
My good old LG Gram (from 2017? 2015? don't even remember) already had 24 GB of RAM. That was 10 years ago.
A decade later I cannot see myself using a laptop with 1/3rd the memory.
Did your LG Gram cost $450 (to make for $600 in today's money) in 2015-17?
If it didn't, Apple has other laptops today with more RAM.
Queue the endless blog posts about running tech on the potato MacBook and being stunned it's functional with massive trade-offs. Groundbreaking stuff.
That usage is "Cue", not "queue".
Cue the queue of blogs! Trigger the formation of a line of posts to be published sequentially.
this has a phone CPU/memory
other test:
2025-09-08 : "Big Data on the Move: DuckDB on the Framework Laptop 13"
"TL;DR: We put DuckDB through its paces on a 12-core ultrabook with 128 GB RAM, running TPC-H queries up to SF10,000."
https://duckdb.org/2025/09/08/duckdb-on-the-framework-laptop...
Mind blown: if you need to handle "big" data on the move, the MacBook Neo is not the right choice. Who would have guessed that outcome?
It occurs to me that there is near zero overlap between people who use a Macbook Neo and people who run DuckDB locally.
It would be a surprise if more than 0.1% of Macbook Neo users have even heard of DuckDB.
Which means that this article is probably just riding the hype.
Trying DuckDB on lower-end MacBooks does show you don't need much muscle for moderate-size analytics. Long term it isn't cost-effective compared to budget laptops, but it's super simple for self-contained pipelines. The thing is, 8GB RAM leaves you stuck once your data actually grows past the marketing demo.
Can't give up and admit that 8GB of RAM is enough, can you?
I think you completely missed the point.
People buy Macbook Neo because they "just need a laptop" or are budget conscious.
I imagine a student would get their hands wet with Postgres before looking at DuckDB or similar.
It would be a surprise if they do heavy workloads with DuckDB. In which case it's definitely worth investing in a more powerful computer.
That's an awesome idea to get a bricked MacBook Neo really fast because those idiots soldered the SSD inside
Apple has been soldering the SSD into MacBooks for over 10 years now, and most 10 year old MacBooks still have a working SSD.
Not if you're power-using it like in the article and relying heavily on swap.
Also there are countless reports of bricked M1 8GB MacBook Airs that are bricked because the SSD used up it's write cycles
https://youtu.be/0qbrLiGY4Cg?si=mjKn2oLjqAb36hPU
That's not what the video insinuates.
Yes, you're right. I meant a different video, but I can't find it right now. I've looked it up, and back then macOS had a bug which exacerbated that issue. Here is an article:
https://www.macrumors.com/2021/02/23/m1-mac-users-report-exc...
You originally stated "Also there are countless reports of bricked M1 8GB MacBook Airs that are bricked because the SSD used up it's write cycles"
Do you have a source for these "countless bricked SSD's"?
Here was the Video I meant back then.
https://m.youtube.com/watch?v=MZuv4TIjk-I&pp=ygURZGVhZCBNYWN...
Not sure about the SSD in particular, but the Neo is apparently pretty modular:
https://www.youtube.com/watch?v=5k7Lv7f-5CQ
Fantastic teardown, thank you. Amazing for Apple. I hope this is the trend going forward, but probably not. Still, a gazillion screws? I just replaced the keyboard on my old HP EliteBook with two screws.
I don't care about a gazillion screws if it's serviceable in the end.
If Apple built their laptops serviceable like ThinkPads, I would buy one today.
It seems like they're starting to learn the cost of being too integrated.
They've slowly been moving towards making it easier to repair individual broken parts. I'm very happy to see that a new keyboard doesn't require replacing the entire top case. That was just crazy.
Seems completely unnecessary; there is probably zero overlap between people who buy a cheap MacBook and people running DuckDB locally.
I agree; I don't think it's going to be something people really do.
I just thought it was neat. It's a phone chip; we've never been able to do stuff like this on an Apple phone chip before. No one was porting this to the iPhone to run there.
In my mind this is purely a curiosity article, and I like that.
I've used MacBook Airs as primary dev machines multiple times in my career (before Apple silicon, when Airs had truly shit performance).
There is always a trade-off of cost/convenience/power, and some folks are going to end up at the Neo end of the spectrum.
I love small form factors, and I am what you'd call a professional :P
I think the form factor is basically the same (maybe slightly thicker) as a Macbook Air. It's basically an Air with lower performance in most dimensions.
You'd be surprised. There are many of us analysts in the third world who are paid pennies and expected to build large-scale exec dashboards from nontrivial data - with no cloud support whatsoever. ETL has to be local from hundreds of GBs of csv dumps.
It's necessary because the ignorant keep saying 8GB of RAM is a deal-breaking limitation on the cheapest MacBook available.
Oh great, the term "big data" is back.
So my definition of big data was data so big it cannot be processed on a single machine in a reasonable amount of time.
I guess they're using a different definition?
I think it's partly tongue in cheek, because when "big data" was over hyped, everyone claimed they were working with big data, or tried to sell expensive solutions for working with big data, and some reasonable minds spoke up and pointed out that a standard laptop could process more "big data" than people thought.
> For our first experiment, we used ClickBench, an analytical database benchmark. ClickBench has 43 queries that focus on aggregation and filtering operations. The operations run on a single wide table with 100M rows, which uses about 14 GB when serialized to Parquet and 75 GB when stored in CSV format.
very much so…
In my former life as a soulless consultant, mid-level IT managers really liked to hear the 3 "V"s mentioned: Velocity, Volume, Variety.
The V of Value is very important in some circles.
Computers got bigger and software got smarter.
You have phones that are faster than cloud VMs of the past. You can use bare metal servers with up to 344 cores and 16TB of ram.
I used to share your definition too, but I now say that if it doesn't open in Microsoft Excel, it's big data.
Processing data that cannot be processed on a single machine is fundamentally a different problem than processing data that can be processed on a single machine. It's useful to have a term for that.
As you say, single machines can scale up incredibly far. That just means 16 TB datasets no longer demand big data solutions.
I get your point, but I don't know if big data is the right term anymore.
Many people like to think they have big data, and you kinda have to agree with them if you want their money. At least in consulting.
Also, you could go well beyond a 16TB dataset on a single machine. You assume that the whole uncompressed dataset has to fit in memory, but many workloads don't need that.
How many people in the world have such big datasets to analyse within reasonable time?
Some people say extreme data.
I think they are simply referring to analytical workloads.
"Your data isn't big" is a good working definition of big data.
Google has big data. You are not Google.
I think the definition of big is smaller than that. Mine was "too big to fit on a maxed-out laptop", effectively >8TB. Our photo collection is bigger than that, it's not 'big data'.
Or one could define it as too big to fit on a single SSD/HDD, maybe >30TB. Still within the reach of a hobbyist, but too large to process in memory and needs special tools to work with. It doesn't have to be petabyte scale to need 'big data' tooling.
>Can I expect good performance from the MacBook Neo with Slack, Microsoft Office, and Google Chrome signed into Atlassian and a CRM, all running simultaneously?
No.
>Do I reject a world where all of the above is necessary to realize value from an entry-level MacBook?
In theory, yes.