I worked on Finder/Time Machine/Spotlight/iOS at Apple from 2000-2007. I worked closely with Bas Ording, Stephen Lemay, Marcel van Os, Imran Chaudhri, Don Lindsey, and Greg Christie. I have no experience with any of the designers who arrived in the post-Steve era. During my time, Jony Ive didn't figure prominently in the UI design, although echoes of his industrial design appeared in various ways in the graphic design of the widgets. Kevin Tiene and Scott Forstall had more influence, for better or worse (extreme skeuomorphism, for example).
The UX group would present work to Steve J. every Thursday, and Steve passed judgement quickly, often harshly and without a lot of feedback, leading to even longer meetings afterward to try to determine course corrections. Steve J. and Bas were on the same wavelength, and a lot of what Bas would show had been worked on directly with Steve beforehand. Other things would be presented for the first time, and Steve could be pretty harsh. Don, Greg, Scott, and Kevin would push back and get abused, but they took the abuse and could make inroads.
Here is my snapshot of Stephen from the time. He presented the UI ideas for the initial tabbed window interface in Safari. He had multiple design ideas, and Steve dismissed them quickly and harshly. My recollection is that Steve said something like, "No. Next. Worse. Next. Even worse. Next. No. Why don't you come back next week with something better." Stephen didn't push back or say much, just went "OK," and that was that. I think Greg was the team manager at the time and pushed Steve for more input, and maybe got some. This was my general observation of how Stephen was over 20 years ago.
I am skeptical about Stephen's ability to make a change unless he is facilitated greatly by someone else or has somehow changed drastically. The fact that he has been on the team while the general opinion of Apple UX quality has degraded to the current point of the Tahoe disaster is telling. Several team members paid dearly in emotional abuse under Steve and decided to leave rather than deal with the environment after Steve's death. Stephen is an SJ-era original and should have been able to push hard against what many of us perceive as very poor decisions. He either agreed with those decisions, or did not and chose to go with the flow and enjoy the benefits of working at Apple. This is fine, I guess. Many people are just fine going with the flow and not rocking the boat. It may be even easier when you have Apple-level comp and benefits.
My opinion: unless Stephen gets a very strong push from other forces, I don't see that he has the will or fortitude to make the changes that he himself has approved in one way or another. Who will push him? Tim Cook, Craig Federighi, Eddy Cue, Phil Schiller? The perceived mess of Tahoe happened on the watch of all of these Apple leaders.
It's a choice. I go to the supermarket twice a week, not shopping for much. I switched the store I use three, four months ago, but I can already talk about some of the employees at the store I visit. Louis is back where he grew up right now because his 97-year-old grandfather died. Among other things, he feels lucky grandpa's passing came after the new year because of his time-off allotment. Nikki had great holidays, mostly because her adult daughter was here for a week. Nadine ("Shh.") has decided she's going to retire at the end of the month but hasn't yet told anyone at the store.
Raffy, the UPS delivery guy I see maybe five times a year? He's doing well, finally feeling things slowing down some after the holidays. His fiancée will finish her graduate degree this spring, then they're going to decide if they want to stay here or move back to the state where they were born. They like it here, but think job opportunities will be better back home.
I'm sure many here are familiar with "This is Water," the commencement speech given by David Foster Wallace. Many often cite his line, "Everybody worships," his observation that we all hold aspects of life in reverence, whether religious things or otherwise. It's a valid, pithy point, but I always thought the key part to his speech comes later and has been widely overlooked:
And the so-called real world will not discourage you from operating on your default settings, because the so-called real world of men and money and power hums merrily along in a pool of fear and anger and frustration and craving and worship of self. Our own present culture has harnessed these forces in ways that have yielded extraordinary wealth and comfort and personal freedom. The freedom all to be lords of our tiny skull-sized kingdoms, alone at the centre of all creation. This kind of freedom has much to recommend it. But of course there are all different kinds of freedom, and the kind that is most precious you will not hear much talked about in the great outside world of wanting and achieving.... The really important kind of freedom involves attention and awareness and discipline, and being able truly to care about other people and to sacrifice for them over and over in myriad petty, unsexy ways every day.
He delivered that speech in 2005. Before the modern smartphone. All those people I mentioned earlier were strangers. That's no longer the case because all of us chose to interrupt what we were doing and open up a little to someone unfamiliar. It's a choice. Or, as Bob Dylan once sang,
Hey! I created Jeff Dean Facts! Not the jokes themselves, but the site that collected them.
It was in 2008, I think (give or take a year; I can't remember). I worked at Google at the time. Chuck Norris Facts was a popular Internet meme (which I think later faded when he came out as MAGA, but I digress...). A colleague (who wishes to remain anonymous) thought the idea of Jeff Dean Facts would be funny, and April 1st was coming up.
At the time, there was a team working on an experimental web app hosting platform code named Prometheus -- it was later released as App Engine. Using an early, internal build I put together a web site where people could submit "facts" about Jeff Dean, rate each other's facts on a five-star scale, and see the top-rated facts. Everything was anonymous. I had a few coworkers who are funnier than me populate some initial facts.
I found a few bugs in Prometheus in the process, which the team rapidly fixed to meet my "launch date" of April 1st. :)
On the day, which I think was a Sunday, early in the morning, I sent an email to the company-wide "misc" mailing list (or maybe it was eng-misc?) from a fake email address (a google group alias with private membership), and got the mailing list moderator to approve it.
It only took Jeff an hour or two to hack his way through the back-end servers (using various internal-facing status pages, Borg logs, etc.) to figure out my identity.
But everyone enjoyed it!
My only regret is that I targeted the site specifically at Jeff and not Sanjay Ghemawat. Back then, Jeff & Sanjay did everything together, and were responsible for inventing a huge number of core technologies at Google (I have no idea to what extent they still work together today). The site was a joke, but I think it had the side effect of elevating Jeff above Sanjay, which is not what I intended. Really the only reason I targeted Jeff is because he's a bit easier to make fun of personality-wise, and because "Jeff Dean Facts" sort of rolls off the tongue easier than "Sanjay Ghemawat Facts" -- but in retrospect this feels a little racist. :(
My personal favorite joke is: Jeff Dean puts his pants on one leg at a time, but if he had more than two legs, you'd see his approach is actually O(log n).
I currently have 20,097 tabs open in one browser profile. The oldest tab appears to be an HN post from 2.5 years ago, which must be the last time I swept tabs into bookmarks.
I used to sweep them more regularly, but Firefox + Sidebery don't even break a sweat with 20K tabs, apparently, so why bother?
The only downside is that it takes about 15 seconds for the browser to launch. I restart the browser whenever Firefox or macOS is updated, so every week or two.
I was lost, literally, hitchhiking across the Australian outback when this article was published. Going home felt scary because I was afraid to be alone with no one else sharing my interests. Travelling made life enjoyable again because just surviving felt like an achievement. But I felt so, so isolated (again, literally!) from modern society. I wanted to find out why I was so deeply interested in computers but not in “tech”. They must work somehow… why did my iPhone (sold that) feel similar to my PC (sold that too) but only one is called a computer? This article framed things in a way that shook me out of a physically dangerous, homeless, jobless rut. It was all code. And I could learn it if I had the time.
Perhaps it was the way it was written; I couldn’t believe the intrigue and passion of computing could be woven together like this. But there it was.
I did make it home eventually. Fortunately, the first 2000 km lift back from Western Australia to the eastern states with a crystal meth addict on the run from the police didn’t end violently. After a few weeks back in Sydney with family, some Linux nerds found me working as a receptionist, answering phones and scanning paper records at a failing medical practice. They got me doing desktop Windows and Linux server support. I’m an official software engineer now. I guess I should print this article out to show my kids!
A lot of this work was done by Walter "2" Costinak. He was an absolute legend, and he's still doing a bit of design work today. I know because he did the branding for my last company and product. I worked with him a lot at Gathering of Developers back in the day. Together we rebuilt the website for Take 2 Games, and they used our work for well over a decade before doing a redesign. If you like this style, I recommend you reach out to him. Here's his website:
For some reason the article made me think about this quote from one of the 2025 MacArthur Fellowship videos, "I think there are some mathematicians who are kind of like the hiker who choose this massive peak they want to scale and they do everything they can to make it up the mountain. I'm more like the kind of hiker who wanders through the forest and stops to look at a pretty stone or flower and reflect on whether it's similar to a stone or flower that I've seen before."
> When BGP traffic is being sent from point A to point B, it can be rerouted through a point C. If you control point C, even for a few hours, you can theoretically collect vast amounts of intelligence that would be very useful for government entities. CANTV's AS8048 being prepended to the AS path 10 times means the traffic would not prioritize this route through AS8048; perhaps that was the goal?
AS prepending is a relatively common method of traffic engineering to reduce traffic from a peer/provider. Looking at CANTV's (AS8048) announcements from outside that period shows they do this a lot.
Since this was detected as a BGP route leak, it looks like CANTV (AS8048) propagated routes from Telecom Italia Sparkle (AS6762) to GlobeNet Cabos Submarinos Colombia (AS52320). This could have simply been a misconfiguration.
Nothing nefarious immediately jumps out to me here. I don't see any obvious attempts to hijack routes to Dayco Telecom (AS21980), which was the actual destination. The prepending would have made traffic less likely to transit over CANTV assuming there was any other route available.
The prepending done by CANTV does make it slightly easier to hijack traffic destined to it (though not really to Dayco), but that appears to be something they normally do.
This could be CANTV trying to force some users of GlobeNet to transit over them to Dayco I suppose, but leaving the prepending in would be an odd way of going about it. I suppose if you absolutely knew you were the shortest path length, there's no reason to remove the prepending, but a misconfiguration is usually the cause of these things.
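To make the prepending mechanics concrete, here's a minimal sketch (not a real BGP implementation; real best-path selection weighs local-pref, origin, MED, and more before AS-path length, and the routes below are hypothetical): all else being equal, BGP prefers the route with the shortest AS path, so prepending your own ASN ten times makes peers deprioritize that route whenever any alternative exists.

```python
# Toy illustration of why AS-path prepending sheds traffic: BGP's
# best-path decision (after higher-priority attributes) prefers the
# route with the fewest ASNs in its AS_PATH.

def best_path(routes):
    """Pick the route whose AS path is shortest (ties: first listed)."""
    return min(routes, key=len)

# Two hypothetical routes to the same prefix:
direct = [6762, 8048]                    # short path via Sparkle, then CANTV
prepended = [52320, 8048] + [8048] * 10  # CANTV prepended itself 10 times

chosen = best_path([prepended, direct])
print(chosen)  # -> [6762, 8048]: the un-prepended path wins
```

The prepended route only carries traffic when it is the sole route left, which is consistent with prepending being ordinary traffic engineering rather than an attempt to attract traffic.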
My dad grew up in the '50s and '60s. During COVID he purchased my daughters the, and I quote, "shittiest briefcase record players" he could find. Both girls listen to their music on their devices, but also buy vinyl. The other day, my eldest came down from her room complaining that her vinyl "sounded awful". I told her to bring it up with their Grampy. His response: "You can't appreciate good playback until you've heard awful playback on shitty record players like I had to." My eldest is now plotting a complete hi-fi system, and is learning all about how to transfer "vinyl" to "digital" without losing the parts of the vinyl she likes.
So, who invented the satellite, then? What about the steam engine? The helicopter?
Sometimes the inventors are so far ahead of their time that materials science first has to catch up (in some cases only a few millennia) before they can realize their devices. Effectively, the first person to create the device after whoever did the materials science part then gets to claim the invention.
So we get Sikorsky, and not da Vinci.
We get Arthur C. Clarke, who claims the 'communications satellite' even though the Moon was there all along and Sputnik was the first working, very crude device (it was one-way only; its single bit of message said 'you lost the space race').
We get Newcomen, Jerónimo de Ayanz y Beaumont (I had to look that up, I can never remember the man's full name), and Hero of Alexandria competing for the steam engine title, with all of them holding some part of the credit.
Pointing at an inventor is hard, and 'who built the first working device' is one way of doing this but it assumes a singular effort whereas most things are team efforts and misses the bit that the idea itself can be an instrumental step in getting your 'true' inventor to make their claim, standing on the shoulders of the giants before them. In isolation, we all probably would invent the hammer in our lifetimes, if that.
Don't miss how this works. It's not a server-side application -- this code runs entirely in your browser using SQLite compiled to WASM. But rather than fetching the full 22GB database, it uses a clever hack that retrieves just the "shards" of the SQLite database needed for the page you are viewing.
I watched it in the browser network panel and saw it fetch:
It's reminiscent of that brilliant SQLite.js VFS trick from a few years ago: https://github.com/phiresky/sql.js-httpvfs - only that one used HTTP range headers, this one uses sharded files instead.
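The arithmetic behind the two fetch strategies is simple. This is a sketch under assumed parameters (4 KiB SQLite pages, 1 MiB shards; I haven't checked what this site or sql.js-httpvfs actually use): the range-header approach turns a page number into a byte range against one monolithic file, while the sharded approach turns the same byte offset into a shard index plus an offset within that shard file.

```python
# Mapping a SQLite page to either an HTTP Range request (httpvfs style)
# or a (shard file, offset) pair (sharded style). Parameters are
# illustrative assumptions, not the real deployment's values.

PAGE_SIZE = 4096          # assumed SQLite page size
SHARD_SIZE = 1024 * 1024  # assumed bytes per static shard file

def range_header(page_no):
    """HTTP Range header fetching one page of the monolithic DB file.
    SQLite pages are numbered from 1."""
    start = (page_no - 1) * PAGE_SIZE
    return f"bytes={start}-{start + PAGE_SIZE - 1}"

def shard_location(page_no):
    """(shard index, offset within that shard) for the same page."""
    start = (page_no - 1) * PAGE_SIZE
    return divmod(start, SHARD_SIZE)

print(range_header(1))      # bytes=0-4095
print(shard_location(300))  # (1, 176128): page 300 lives in shard 1
```

Sharding trades the need for a Range-capable server for a pile of small static files, which any dumb file host or CDN can serve.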
A few years back I patched the memory allocator used by the Cloudflare Workers runtime to overwrite all memory with a static byte pattern on free, so that uninitialized allocations contain nothing interesting.
We expected this to hurt performance, but we were unable to measure any impact in practice.
Everyone still working in memory-unsafe languages should really just do this IMO. It would have mitigated this Mongo bug.
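For illustration, here's a toy sketch of poison-on-free over a bytearray "heap" (the real change lives inside a C/C++ allocator's free path, and the 0xDE pattern is my assumption; any fixed constant works): stamp a recognizable pattern over every freed block, so a later read of uninitialized memory sees the pattern rather than stale secrets.

```python
# Conceptual poison-on-free: overwrite freed blocks with a fixed byte
# so use-of-uninitialized-memory bugs can't leak previous contents.

POISON = 0xDE  # assumed pattern; the value just needs to be constant

heap = bytearray(64)  # toy arena standing in for the process heap

def my_free(offset, size):
    """'Free' a block by stamping the poison pattern over it."""
    heap[offset:offset + size] = bytes([POISON]) * size

heap[8:16] = b"secret!!"   # pretend this block held sensitive data
my_free(8, 8)

assert b"secret" not in heap              # the secret is gone...
assert all(b == POISON for b in heap[8:16])  # ...replaced by the pattern
```

In a real allocator the memset happens in `free()`/`operator delete`, and a distinctive pattern (rather than zero) has the bonus of making use-after-free reads obvious in crash dumps.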
That is fair, particularly compared to Janet Jackson! I will add detail.
In their younger days, two distinguished engineers, Bryan Cantrill and Brendan Gregg, made this video in which they scream at a data storage server nicknamed Thumper. Screaming at it has surprising results, which are observed with a novel software technology called DTrace.
The Sun Fire X4500 was a dense storage server: 4U with 48 disks, insane IO performance, and a newish filesystem called ZFS. The video is not only funny in content; it features technology and technologists that became very impactful, hence the classic tag.
---
I love the lore, so I'll drop more.
While our team previously used AFS (mainly for its great caching) and many storage servers, this hardware combined with its software allowed us to consolidate, manage, and access data in new ways, alleviating many of our market data analysis problems.
We switched to NFS, which previously had not been performant enough for us on other hw/sw architectures. While using NFS with the Thumpers and then Thors (X4540) was fantastic, eventually the data scales became hard again, and we built a distributed immutable filesystem that looked like Hadoop HDFS and the Cassandra file system, named after our favorite Klingon: Worf (Write-Once Read-Frequently).
Interestingly, in 2025 both XTX [1] and HRT [2] open-sourced their distributed file systems which are pretty similar to it, using 2020's tech rather than 2000's. HRT's is based on Meta's Tectonic which is a spiritual successor to Cassandra.
I wrote about our parallel HFT networking journey once upon a time on HN. [3]
A company adopts some software with a free but not copyleft license. "Adopts" means they declare: "this is good, we will use it".
Developers help develop the software (free of charge) and the company says thank you very much for the free labour.
Company puts that software into everything it does, and pushes it into the infrastructure of everything it does.
Some machines run that software because an individual developer put it there; other machines run it because a company put it there, sometimes by exerting some sort of power for it to end up there (for example, economic incentives to vendors, as with Android).
At some point the company says, "You know what, we like this software so much that we're going to fork it, but the fork isn't going to be free or open source. It's going to be just ours, and we're not going to share the improvements we made."
But now that software is already running in a lot of machines.
Then the company says "we're going to tweak the software a bit, so that it's no longer inter-operable with the free version. You have to install our proprietary version, or you're locked out" (out of whatever we're discussing hypothetically. Could be a network, a standard, a protocol, etc).
Developers go "shit, I guess we need to run the proprietary version now. we lost control of it."
This is what happened, e.g., with Chrome. There's Chromium; anyone can build it. But that's not Chrome. And Chrome is what everybody uses, because Google has lock-in power. Then Google says, "Oh, I'm going to disallow the extensions you like, so we can show you more ads." Then they make tweaks to Chrome so that websites only render well if they use certain APIs, so now competitors to Chrome are forced to implement those APIs, but those aren't public.
And all of this was initially built with free labour, which Google took, by people who thought they were contributing to some commons in a sense.
Copyleft licenses protect against this. Part of the license says: "If you use this software and you make changes to it, you have to share the changes as well; you can't keep them for yourself."
I'm taking a moment to recognize once more the work that user @atdrummond (Alex Thomas Drummond) did for a couple years to help others here. I did not know him, don’t think I ever interacted with him, and I did not benefit from his generosity, but I admired his kindness. Just beautiful.
If this had been available in 2010, Redis scripting would have been JavaScript and not Lua. Lua was chosen based on the implementation requirements, not the language ones (small, fast, ANSI C). I appreciate certain ideas in Lua, and people love it, but I was never able to like Lua, because it departs from a more Algol-like syntax and semantics without good reasons, for my taste. This creates friction for newcomers. I love friction when it opens new useful ideas and abstractions that are worth it; if you learn Smalltalk or FORTH and for some time you are lost, that's part of how those languages are different. But I think for Lua this is not true enough: it feels like it departs from what people know without good reasons.
I'm the Manager of the Computing group at JILA at CU, where utcnist*.colorado.edu used to be housed. Those machines were, for years, consistently the highest bandwidth usage computers on campus.
Unfortunately, the HP cesium clock that backed the utcnist systems failed a few weeks ago, so they're offline. I believe the plan is to decommission those servers anyway - NIST doesn't even list them on the NTP status page anymore, and Judah Levine has retired (though he still comes in frequently). Judah told me in the past that the typical plan in this situation is that you reference a spare HP clock with the clock at NIST, then drive it over to JILA backed by some sort of battery and put it in the rack, then send in the broken one for refurb (~$20k-$40k; new box is closer to $75k). The same is true for the WWVB station, should its clocks fail.
There is fiber that connects NIST to CU (it's part of the BRAN - Boulder Research and Administration Network). Typically that's used when comparing some of the new clocks at JILA (like Jun Ye's strontium clock) to NIST's reference. Fun fact: Some years back the group was noticing loss due to the fiber couplers in various closets between JILA & NIST... so they went to the closets and directly spliced the fibers to each other. It's now one single strand of fiber between JILA & NIST Boulder.
That fiber wasn't connected to the clock that backed utcnist though. utcnist's clock was a commercial cesium clock box from HP that was also fed by GPS. This setup was not particularly sensitive to people being in the room or anything.
Another fun fact: utcnist3 was an FPGA developed in-house to respond to NTP traffic. Super cool project, though I didn't have anything to do with it, haha.
Agreed, which is why what GP suggests is much more sensible: it's venturing into known territory, except only one party in the conversation knows it, and the other literally cannot know it. It would be a fantastic way to build fast intuition for what LLMs are and aren't capable of.
I wonder if you could query some of the ideas of Frege, Peano, and Russell and see if it could, through questioning, get to some of the ideas of Gödel, Church, and Turing -- and get it to "vibe code," or more like "vibe math," some program in lambda calculus or something.
Playing with the science and technical ideas of the time would be amazing, like where you know some later physicist found some exception to a theory or something, and questioning the models assumptions - seeing how a model of that time may defend itself, etc.
I used to teach 19th-century history, and the responses definitely sound like a Victorian-era writer. And they of course sound like writing (books and periodicals etc) rather than "chat": as other responders allude to, the fine-tuning or RL process for making them good at conversation was presumably quite different from what is used for most chatbots, and they're leaning very heavily into the pre-training texts. We don't have any living Victorians to RLHF on: we just have what they wrote.
To go a little deeper on the idea of 19th-century "chat": I did a PhD on this period and yet I would be hard-pushed to tell you what actual 19th-century conversations were like. There are plenty of literary depictions of conversation from the 19th century of presumably varying levels of accuracy, but we don't really have great direct historical sources of everyday human conversations until sound recording technology got good in the 20th century. Even good 19th-century transcripts of actual human speech tend to be from formal things like court testimony or parliamentary speeches, not everyday interactions. The vast majority of human communication in the premodern past was the spoken word, and it's almost all invisible in the historical sources.
Anyway, this is a really interesting project, and I'm looking forward to trying the models out myself!
It is not just a way of writing ring buffers. It's a way of implementing concurrent non-blocking single-reader single-writer atomic ring buffers with only atomic load and store (and memory barriers).
The author says that non-power-of-two sizes are not possible, but I'm pretty sure they are if you use a conditional instead of an integer modulus.
I first learnt of this technique from Phil Burk, we've been using it in PortAudio forever. The technique is also widely known in FPGA/hardware circles, see:
"Simulation and Synthesis Techniques for Asynchronous FIFO Design", Clifford E. Cummings, Sunburst Design, Inc.
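To illustrate the non-power-of-two point, here's a sketch of just the index handling (in Python, so it necessarily ignores the atomic loads/stores and memory barriers a real C implementation needs): a single-reader single-writer ring that wraps with a conditional instead of `% capacity`, keeping one slot empty to distinguish full from empty.

```python
# SPSC ring buffer index logic with a non-power-of-two capacity.
# Wrap-around uses a compare instead of a modulus, so any capacity works.
# In real concurrent code, tail updates would be store-release by the
# producer and head updates store-release by the consumer.

class SpscRing:
    def __init__(self, capacity):      # capacity need NOT be a power of two
        self.buf = [None] * capacity
        self.head = 0                  # advanced only by the consumer
        self.tail = 0                  # advanced only by the producer

    def _next(self, i):
        i += 1
        return 0 if i == len(self.buf) else i  # conditional wrap, no '%'

    def push(self, item):
        nxt = self._next(self.tail)
        if nxt == self.head:           # would collide with head: full
            return False
        self.buf[self.tail] = item
        self.tail = nxt                # publish the new element
        return True

    def pop(self):
        if self.head == self.tail:     # empty
            return None
        item = self.buf[self.head]
        self.head = self._next(self.head)
        return item

r = SpscRing(3)                        # odd capacity: holds 2 items
assert r.push(1) and r.push(2) and not r.push(3)
assert r.pop() == 1 and r.pop() == 2 and r.pop() is None
```

The power-of-two variant exists so free-running counters can be masked cheaply; the conditional form trades that micro-optimization for arbitrary sizes.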
When I published Grisu (Google's double-conversion), it was multiple times faster than the existing algorithms. I knew that there was still room for improvement, but I was at most expecting a factor of 2 or so. Six times faster is really impressive.
Having worked at Mozilla a while ago, I can say the CEO role is one I wouldn't wish on my worst enemy. Success is oddly defined: it's a non-profit (well, a for-profit owned by a non-profit) that needs to make a big profit in a short amount of time. And anything done to make that profit will annoy the community.
I hope Anthony leans into what makes Mozilla special. The past few years, Mozilla's business model has been to just meekly "us-too!" trends... IoT, Firefox OS, and more recently AI.
What Mozilla is good at, though, is taking complex things the average user doesn't really understand and making them palatable and safe. They did this with web standards... nobody cared about web standards, but Mozilla focused on usability.
(Slide aside, it's not a coincidence the best CEO Mozilla ever had was a designer.)
I'm not an AI hater, but I don't think Mozilla can compete here. There's just too much good stuff already, and it's not the type of thing Mozilla will shine with.
Instead, if I were CEO, I'd go the opposite way: I'd focus on privacy. Not AI privacy, but privacy in general. Buy a really great email provider, and start to own "identity on the internet". As there are more bots and less privacy, identity is going to be incredibly important over the years... and right now, Google de facto owns identity. Make it free, but also give people a way to pay.
Would this work? I don't know. But like I said, it's not a job I envy.
As the first author of the salmon paper: yes, this was exactly our point. fMRI can be an amazing tool, but if you are going to trust the results, you need proper statistical corrections along the way. Researchers were capitalizing on chance in many cases, as they failed to apply effective corrections for the multiple comparisons problem. We argued with the dead fish that they should.
There you can download it in high quality, and it’s pay-what-you-want: you can get it for free if you want, or pay what you feel like and support me. Either way, I’m happy that you enjoy it!
The music should also be on Spotify, Apple Music, and most music streaming services within the next 24h.
A bit about the process of scoring Size of Life:
I’ve worked with Neal before on a couple of his other games, including Absurd Trolley Problems, so we were used to working together (and with his producer—you’re awesome, Liz!). When Neal told me about Size of Life, we had an inspiring conversation about how the music could make the players feel.
The core idea was that it should enhance that feeling of wondrous discovery, but subtly, without taking the attention away from the beautiful illustrations.
I also thought it should reflect the organisms' increasing size—as some of you pointed out, the music grows with them. I think of it as a single instrument that builds upon itself, like the cells in an increasingly complex organism. So I composed 12 layers that loop indefinitely—as you progress, each layer is added, and as you go back, they’re subtracted. The effect is most clear if you get to the end and then return to the smaller organisms!
Since the game has an encyclopedia vibe to it, I proposed going with a string instrument to give it a subtle “Enlightenment-era” and “cultural” feel. I suspected the cello could be a good instrument because of its range and expressivity.
Coincidentally, the next week I met the cellist Iratxe Ibaibarriaga at a game conference in Barcelona, where I’m based, and she immediately became the ideal person for it. She’s done a wonderful job bringing a ton of expressivity to the playing, and it’s been a delight to work with her.
I got very excited when Neal told me he was making an educational game—I come from a family of school teachers. I’ve been scoring games for over 10 years, but this is the first educational game I’ve scored.
In a way, now the circle feels complete!
(if anyone wants to reach out, feel free to do so! You can find me and all my stuff here: https://www.aleixramon.com/ )
The odd thing about all of this (well, I guess it's not odd, just ironic), is that when Google AdWords started, one of the notable things about it was that anyone could start serving or buying ads. You just needed a credit-card. I think that bought Google a lot of credibility (along with the ads being text-only) as they entered an already disreputable space: ordinary users and small businesses felt they were getting the same treatment as more faceless, distant big businesses.
I have a friend who says Google's decline came when they bought DoubleClick in 2008 and suffered a reverse takeover: their customers shifted from being Internet users to being other, similarly sized corporations.
One thing this really highlights to me is how often the "boring" takes end up being the most accurate. The provocative, high-energy threads are usually the ones that age the worst.
If an LLM were acting as a kind of historian revisiting today’s debates with future context, I’d bet it would see the same pattern again and again: the sober, incremental claims quietly hold up, while the hyperconfident ones collapse.
Something like "Lithium-ion battery pack prices fall to $108/kWh" is classic cost-curve progress. Boring, steady, and historically extremely reliable over long horizons. Probably one of the most likely headlines today to age correctly, even if it gets little attention.
On the flip side, stuff like "New benchmark shows top LLMs struggle in real mental health care" feels like high-risk framing. Benchmarks rotate constantly, and “struggle” headlines almost always age badly as models jump whole generations.
I bet there are many "boring but right" takes we overlook today, and I wonder if there's a practical way to surface them before hindsight does.
I downloaded the original article page, had Claude extract the submission info to JSON, then wrote a script (by hand ;) to feed each submission title to gemini-3-pro and ask it for an article webpage and then for a random number of comments.
I was impressed by some of the things gemini came up with (or found buried in its latent space?). Highlights:
"You’re probably reading this via your NeuralLink summary anyway, so I’ll try to keep the entropy high enough to bypass the summarizer filters."
"This submission has been flagged by the Auto-Reviewer v7.0 due to high similarity with "Running DOOM on a Mitochondria" (2034)."
"Zig v1.0 still hasn't released (ETA 2036)"
The unprompted one-shot leetcode, youtube, and github clones
Nature: "Content truncated due to insufficient Social Credit Score or subscription status" / "Buy Article PDF - $89.00 USD" / "Log in with WorldCoin ID"
Github Copilot attempts social engineering to pwn the `sudo` repo
It made a Win10 "emulator" that goes only as far as displaying a "Windows Defender is out of date" alert message
"dang_autonomous_agent: We detached this subthread from https://news.ycombinator.com/item?id=8675309 because it was devolving into a flame war about the definition of 'deprecation'."