
Apple shipped me a 79-pound iPhone repair kit to fix a 1.1-ounce battery


I’m starting to think Apple doesn’t want us to repair them

Apple must be joking.

That’s how I felt again and again as I jumped through hoop after ridiculous hoop to replace the battery in my iPhone Mini. Part of that was the repair process — mostly, it was how difficult Apple makes it to even get there.

Last month, Apple launched its Self-Service Repair program, letting US customers fix broken screens, batteries, and cameras on the latest iPhones using Apple’s own parts and tools for the first time ever. I couldn’t wait. I’d never successfully repaired a phone — and my wife has never let me live down the one time I broke her Samsung Galaxy while using a hair dryer to replace the screen. This time, armed with an official repair manual and genuine parts, I’d make it right.

Two Pelican cases sit on a train platform, with a tiny red iPhone 13 mini next to them for scale. Photo by Sean Hollister / The Verge
A repair station in a box — or two.

That Apple would even let me buy those parts, much less read its manuals and rent its tools, is a major change of pace for the company. For years, Apple has been lobbying to suppress right-to-repair policies around the country, with the company accused of doing everything it can to keep customers from repairing their own phones. It’s easy to see this as a huge moment for DIY advocates. But having tried the repair process, I actually can’t recommend it at all — and I have a sneaking suspicion that Apple likes it that way.

The thing you should understand about Apple’s home repair process is that it’s a far cry from traditional DIY if you opt for the kit — which I did, once I saw the repair manual only contains instructions for Apple’s own tools. (You can just buy a battery if you want.)

I expected Apple would send me a small box of screwdrivers, spudgers, and pliers; I own a mini iPhone, after all. Instead, I found two giant Pelican cases — 79 pounds of tools — on my front porch. I couldn’t believe just how big and heavy they were considering Apple’s paying to ship them both ways.

I lugged those cases onto a BART train to San Francisco and dragged them down the streets to our office. Then, I set everything out on a table and got started.

 Photo by Sean Hollister / The Verge
Apple’s Self-Service Repair kit laid out on a table.

Step one of opening an iPhone is, basically, using a hefty machine to suck the screen off the top. No, I wasn’t microwaving a jelly-filled sock to loosen the Apple goop holding my screen down! Apple lets you rent an industrial-grade heat station that looks like a piece of lab equipment, right down to the big red safety dial you twist to release the emergency-off button and the suction-cup-tipped mechanical lifting arm. It looks pretty cool.

 Photo by Thomas Blythe / The Verge
Hot pocket!

I slip my phone in a perfectly sized “heating pocket” that clamps a ring of copper around the iPhone’s band to evenly distribute the heat and melt the seal around the screen, realize in horror that I’ve invited the “Hot Pockets!” jingle to live in my head rent-free, then spin a dial to raise the arm that separates the iPhone’s screen from its body.

Or, that’s how it’s supposed to work, anyhow. The heating machine threw an error code partway through my first attempt, and Apple’s manual didn’t explain what to do if that happens after you’ve stuck your phone inside. So I wound up heating it twice in a row. And yet, that still wasn’t quite enough for my screen to “immediately” pop up when the suction cup arm began to lift the glass. The manual did cover that situation, making me spin a second hidden knob to put more pressure on the suction cup, but I started freaking out when I saw what looked like cracks spider across the screen. (It turned out it was just suction cup residue.)

 Still by Thomas Blythe / The Verge
Here’s the suction cup arm.

Once the screen was loose, I cut through the softened glue holding it to the iPhone’s frame with Apple’s single tiny adhesive cutter, which also gave me a little trouble. The blade got caught when I wedged it under the corners of the screen, and I had to yank it out without accidentally sending my phone clattering to the ground. The kit comes with a perfect-fit tray to hold your phone steady and extra suction cups to hold the screen without stretching the fragile ribbon cables, but nothing to hold the tray itself.

 Image: Apple
The manual pictures a technician holding the screen with one hand while cutting adhesive with the other, but I also had to hold the tray to keep it from sliding around.

Apple also provides a set of fancy torque drivers to make sure you don’t screw down the phone’s tiny screws too tightly, but it’s a bit of a chore. I must have dropped Apple’s incredibly tiny fasteners a dozen times while removing the slivers of metal that hold the screen’s ribbon cables in place, as well as the bottom speaker that Apple makes you yank to get at the battery. Presumably just to make it more difficult to repair, Apple requires three different screwdriver bits just to remove the screen, and none of Apple’s bits are magnetized to keep the screws from slipping.

 Photo by Thomas Blythe / The Verge
The torque driver has several swappable bits, and you’ll need at least three.

At this point, there was still a bunch of goopy adhesive around the sides of my iPhone’s frame. While the instructions suggest it’ll just peel off in a few big pieces if you pull with tweezers (which didn’t come in the box), I gave up after 10 minutes of picking away at tiny fuzzy blobs of glue. I was just going to be adding more adhesive anyway, after all. Later, I discovered this was not my best idea.

When it finally lay open on the table, I couldn’t help but gawk in wonder at my iPhone 13 Mini’s precisely packed guts, and I realized I was having fun! Slicing open my phone was a thrill. But a lot of that thrill came from not knowing whether my phone would survive surgery — Apple tools or no.

 Photo by Sean Hollister / The Verge
Still photos don’t do it justice — particularly not mine.

From there, it was time to swap out the battery. Once I finished cursing at the far-too-easy-to-tear tabs that held in the original lithium pack, I used Apple’s fancy battery press with a rolling arm to seat — but not squish — the new battery down. But I could have done that with my fingertips; I’d have much rather had a tool to properly align the battery, which I had to yank and reposition after plopping it down a millimeter too far south, or a tool to test whether you’ve properly reseated the battery and display connectors. But I’ll get to that.

Next, the instructions had me apply an actually helpful precut adhesive sheet designed to stick my screen back to the frame, which was easy to slot into exactly the right place and press down with my fingers. Then came a huge spring-loaded press (with a veritable slot machine of an arm) to close the phone once more. But even with the press, my screen wasn’t perfectly flush with the frame afterwards, perhaps due to the extra glue I didn’t manage to remove.

 Photo by Thomas Blythe / The Verge
Apple’s screen press is a one-armed bandit.

With my phone closed up again, I held down the power button. Nothing. No bright white Apple logo — no response at all. For one horrible moment, I realized Apple gave me no way to test whether the battery and display connectors were actually seated (they probably weren’t) and had me close up the phone anyhow.

Then, a forlorn hope: maybe the replacement battery shipped empty? I scrambled around the office for a Lightning cable, and… my iPhone 13 Mini finally lit up.

You can’t get Apple to validate your repair using your repaired phone — you need a separate device, an agent explained to me in chat. Photo by Sean Hollister / The Verge
Sorry, Brian, you did your best.

But I wasn’t done yet. The single most frustrating part of this process, after using Apple’s genuine parts and Apple’s genuine tools, was that my iPhone didn’t recognize the genuine battery as genuine. “Unknown Part,” flashed a warning. Apparently, that’s the case for almost all of these parts: you’re expected to dial up Apple’s third-party logistics company after the repair so they can validate the part for you. That’s a process that involves having an entirely separate computer and a Wi-Fi connection since you have to reboot your iPhone into diagnostics mode and give the company remote control. Which, of course, defeats a bunch of the reasons you’d repair your own device at home!

And, to tell you the truth, the second most frustrating part didn’t occur during the repair either. If it were just me, I’d have aborted the entire process before Apple ever shipped 79 pounds of equipment to my home.

It would be an understatement to say that Apple has a history of resisting right-to-repair efforts. Before the iPhone, replacing a battery was typically as easy as inserting a thumbnail to pop off your phone’s back cover; afterwards, phones largely became tricky to even open without specialized tools, which arguably pushed customers to replace their perfectly good devices when they might have only needed a new screen or battery. Also see: batterygate.

In recent years, the company has actively lobbied against right-to-repair legislation in at least 20 states, sneakily pushing California, as one example, to postpone its bill. (The bill died in committee again this very week.) Apple cracked down on unauthorized repairs by throwing warnings or even disabling features if you repair phones with non-“genuine” parts, though it walked some of that back after an outcry. And it put together a contract for indie repair shops that was reportedly so invasive, many refused to sign it.

So, it didn’t surprise me when Apple’s press release about the program warned “the vast majority of customers” away from their own repairs, or when I needed to enter my phone’s IMEI to prove I owned my phone, or when I had to enter a six-digit code to prove I’d read the repair manual — which suggests you need not only three pages’ worth of tools but also a jar of sand in case your battery catches fire, one of many not-strictly-necessary items that don’t come with the kit. Apple also only includes instructions on how to use its own special tools for repairs, so you’re on your own if you want to try a more low-key or inexpensive DIY approach.

Yeah, none of that surprised me. What surprised me was the price tag.

  • $69 for a new battery — the same price the Apple Store charges for a battery replacement, except here I get to do all the work and assume all the risk.
  • $49 to rent Apple’s tools for a week, more than wiping out any refund I might get for returning the old used part.
  • A $1,200 credit card hold for the toolkit, which I would forfeit if the tools weren’t returned within seven days of delivery.

Let’s be clear: this is a ridiculous amount of risk for the average person who just wants to put a new battery in their phone. And it’s frankly weird for Apple to insist on you covering the full value of the tools. “It’s not like when you rent a car they make you put down $20,000 as a safety deposit,” my colleague Mitchell Clark points out.

I should also mention the Pelican cases landed at my door two days before the battery arrived, so I only had five days to do the job before that $1,200 deadline.

The fine print basically says I have seven days to return the toolkit or I might be out twelve hundred bucks. Screenshot by Sean Hollister / The Verge
My shopping cart. Get a load of the fine print.

The more I think about it, the more I realize Apple’s Self-Service Repair program is the perfect way to make it look like the company supports right-to-repair policies without actually encouraging them at all. Apple can say it’s giving consumers access to everything, even the same tools its technicians use, while scaring them away with high prices, complexity, and the risk of losing a $1,200 deposit. This way, Apple gets credit for walking you through an 80-page repair, instead of building phones where — say — you don’t need to remove the phone’s most delicate components and two different types of security screws just to replace a battery.

To me, those giant Pelican cases are the proof. It would cost Apple a fortune to ship 79 pounds of equipment to individual homes all over the country, even with corporate discounts. The Verge is obviously far, far smaller than Apple, but it would cost us upwards of $200 just to return those cases to their sender. Yet Apple offers free shipping both directions with your $49 rental, plus a dedicated support team to validate your parts and facilitate returns. (Though, apparently, it doesn’t do the latter anywhere near its Silicon Valley HQ: when I took the support team up on its offer of picking up my battery, they told me they didn’t have a driver within 250 miles of my location, and I should just drop it off at the nearest Home Depot.)

I don’t think Apple expects anyone to seriously take it up on the offer of self-service repair kits. It stacked the deck in favor of taking your phone to an Apple Store, where it can tempt you to buy something new instead. The real victory will come months or years down the road, though. That’s when Apple can tell legislators it tried to give right-to-repair advocates what they wanted — but that consumers overwhelmingly decided Apple knows best.


Presocratic Return Policy


Once Frenemies, Elastic and AWS Are Now Besties

1 Comment
Paul Sawers writes via VentureBeat: It has been a frosty few years for Elastic and Amazon's AWS cloud computing arm, with the duo frequently locking horns over various issues relating to Elastic's ex-open-source database search engine -- Elasticsearch. To cut a War and Peace-esque story short, Amazon had introduced its own managed Elasticsearch service called Amazon Elasticsearch Service way back in 2015, and in the intervening years the "confusion" this (among other shenanigans) caused in the cloud sphere ultimately led Elastic to transition Elasticsearch from open source to "free and open" (i.e., a less permissive license), exerting more control over how the cloud giants of the world could use the product and Elasticsearch name. In response, Amazon launched an Elasticsearch "fork" called OpenSearch, and the two companies finally settled a long-standing trademark dispute, which effectively meant that Amazon would stop associating the Elasticsearch brand with Amazon's own products. This was an important final piece of the kiss-and-make-up puzzle, as it meant that customers searching for Elastic's fully managed Elasticsearch service (Elastic Cloud) in the AWS Marketplace wouldn't also stumble upon Amazon's incarnation and wonder which one they were actually looking for.

Fast-forward to today, and you would hardly know that the two companies were once at loggerheads. Over the past year, Elastic and Amazon have partnered to bring all manner of technologies and integrations to market, and they've worked to ensure that their shared customers can more easily onboard to Elastic Cloud within Amazon's infrastructure. Building on a commitment last month to make AWS and Elastic work even better together, Elastic and AWS today announced an even deeper collaboration, to "build, market and deliver" frictionless access to Elastic Cloud on AWS. In essence, this means that the two companies will go full-throttle on their "go-to-market" sales and marketing strategies -- this includes a new free 7-day trial for customers wanting to test-drive Elastic Cloud directly from the AWS Marketplace.

On top of that, AWS has committed to working with Elastic to generate new business across Amazon's various cloud-focused sales organizations -- this is a direct result of Elastic joining the AWS ISV Accelerate program. All of this has been made possible because of the clear and distinct products that now exist -- Amazon has OpenSearch, and Elastic has Elasticsearch, which makes collaboration that much easier.
What does Amazon get for all of this? "Put simply, companies accessing Elastic's services on AWS infrastructure drive a lot of cloud consumption -- which translates into ka-ching for Amazon," adds Sawers.

Into each life some rain must fall

1 Comment
Johnny Cash was born in Kingsland, Arkansas, and so the city put his silhouette on the town water tower. On Monday, a man was arrested for shooting a hole in the water tower.

Rust: A Critical Retrospective


Since I was unable to travel for a couple of years during the pandemic, I decided to take my new-found time and really lean into Rust. After writing over 100k lines of Rust code, I think I am starting to get a feel for the language, and like every cranky engineer I have developed opinions; and because this is the Internet, I’m going to share them.

The reason I learned Rust was to flesh out parts of the Xous OS written by Xobs. Xous is a microkernel message-passing OS written in pure Rust. Its closest relative is probably QNX. Xous is written for lightweight (IoT/embedded scale) security-first platforms like Precursor that support an MMU for hardware-enforced, page-level memory protection.

In the past year, we’ve managed to add a lot of features to the OS: networking (TCP/UDP/DNS), middleware graphics abstractions for modals and multi-lingual text, storage (in the form of an encrypted, plausibly deniable database called the PDDB), trusted boot, and a key management library with self-provisioning and sealing properties.

One of the reasons why we decided to write our own OS instead of using an existing implementation such as SeL4, Tock, QNX, or Linux, was we wanted to really understand what every line of code was doing in our device. For Linux in particular, its source code base is so huge and so dynamic that even though it is open source, you can’t possibly audit every line in the kernel. Code changes are happening at a pace faster than any individual can audit. Thus, in addition to being home-grown, Xous is also very narrowly scoped to support just our platform, to keep as much unnecessary complexity out of the kernel as possible.

Being narrowly scoped means we could also take full advantage of having our CPU run in an FPGA. Thus, Xous targets an unusual RV32-IMAC configuration: one with an MMU + AES extensions. It’s 2022 after all, and transistors are cheap: why don’t all our microcontrollers feature page-level memory protection like their desktop counterparts? Being an FPGA also means we have the ability to fix API bugs at the hardware level, leaving the kernel more streamlined and simplified. This was especially relevant in working through abstraction-busting processes like suspend and resume from RAM. But that’s all for another post: this one is about Rust itself, and how it served as a systems programming language for Xous.

Rust: What Was Sold To Me

Back when we started Xous, we had a look at a broad number of systems programming languages and Rust stood out. Even though its `no-std` support was then-nascent, it was a strongly-typed, memory-safe language with good tooling and a burgeoning ecosystem. I’m personally a huge fan of strongly typed languages, and memory safety is good not just for systems programming: it enables optimizers to do a better job of generating code, and it makes concurrency less scary. I actually wished for Precursor to have a CPU that had hardware support for tagged pointers and memory capabilities, similar to what was done on CHERI, but after some discussions with the team doing CHERI it was apparent they were very focused on making C better and didn’t have the bandwidth to support Rust (although that may be changing). In the grand scheme of things, C needed CHERI much more than Rust needed CHERI, so that’s a fair prioritization of resources. However, I’m a fan of belt-and-suspenders for security, so I’m still hopeful that someday hardware-enforced fat pointers will make their way into Rust.

That being said, I wasn’t going to go back to the C camp simply to kick the tires on a hardware retrofit that backfills just one poor aspect of C. The glossy brochure for Rust also advertised its ability to prevent bugs before they happened through its strict “borrow checker”. Furthermore, its release philosophy is supposed to avoid what I call “the problem with Python”: your code stops working if you don’t actively keep up with the latest version of the language. Also unlike Python, Rust is not inherently unhygienic, in that the advertised way to install packages is not also the wrong way to install packages. Contrast this with Python, where the official docs on packages lead you to add them to the system environment, only to be scolded by Python elders with a “but of course you should be using a venv/virtualenv/conda/pipenv/…, everyone knows that”. My experience with Python would have been so much better if this detail was not relegated to Chapter 12 of 16 in the official tutorial. Rust is also supposed to be better than e.g. Node at avoiding the “oops I deleted the Internet” problem when someone unpublishes a popular package, at least if you use fully specified semantic versions for your packages.

In the long term, the philosophy behind Xous is that eventually it should “get good enough”, at which point we should stop futzing with it. I believe it is the mission of engineers to eventually engineer themselves out of a job: systems should get stable and solid enough that it “just works”, with no caveats. Any additional engineering beyond that point only adds bugs or bloat. Rust’s philosophy of “stable is forever” and promising to never break backward-compatibility is very well-aligned from the point of view of getting Xous so polished that I’m no longer needed as an engineer, thus enabling me to spend more of my time and focus supporting users and their applications.

The Rough Edges of Rust

There’s already a plethora of love letters to Rust on the Internet, so I’m going to start by enumerating some of the shortcomings I’ve encountered.

“Line Noise” Syntax

This is a superficial complaint, but I found Rust syntax to be dense, heavy, and difficult to read, like trying to read the output of a UART with line noise:

Trying::to_read::<&'a heavy>(syntax, |like| { this. can_be( maddening ) }).map(|_| ())?;

In more plain terms, the line above does something like invoke a method called “to_read” on the object (actually `struct`) “Trying” with a type annotation of “&heavy” and a lifetime of ‘a with the parameters of “syntax” and a closure taking a generic argument of “like” calling the can_be() method on another instance of a structure named “this” with the parameter “maddening” with any non-error return values mapped to the Rust unit type “()” and errors unwrapped and kicked back up to the caller’s scope.

Deep breath. Surely, I got some of this wrong, but you get the idea of how dense this syntax can be.
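To make the density concrete, here is a runnable miniature of my own (the function and names are hypothetical, not from Xous) that piles up the same constructs: a turbofish type annotation, a closure, `.map(|_| ())`, and `?`-based error propagation, all on one line:

```rust
// A self-contained sketch combining a turbofish annotation, a closure,
// `.map(|_| ())` to discard the success value, and `?` to bounce errors
// back up to the caller -- the same pile-up as the line above.
fn parse_and_log(input: &str) -> Result<(), std::num::ParseIntError> {
    input.parse::<i32>().map(|n| println!("parsed {n}")).map(|_| ())?;
    Ok(())
}

fn main() {
    assert!(parse_and_log("42").is_ok());
    assert!(parse_and_log("not a number").is_err());
}
```

Reading left to right: parse the string as an `i32`, print on success, throw away the value, and propagate any error to the caller.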

And then on top of that you can layer macros and directives which don’t have to follow other Rust syntax rules. For example, if you want to have conditionally compiled code, you use a directive like

#[cfg(all(not(baremetal), any(feature = "hazmat", feature = "debug_print")))]

Which says: if either the feature “hazmat” or “debug_print” is enabled and you’re not running on bare metal, use the block of code below (and I surely got this wrong too). The most confusing part about this syntax, to me, is the use of a single “=” to denote equivalence and not assignment, because the stuff in config directives isn’t Rust code. It’s like a whole separate meta-language with a dictionary of key/value pairs that you query.
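A runnable sketch of that same meta-language, using only built-in predicates so no Cargo feature declarations are needed (assuming a hosted unix or windows target):

```rust
// `any`/`not` compose boolean conditions over key/value pairs the
// compiler exposes; the `=` in predicates like `feature = "..."` is a
// dictionary lookup, not an assignment.
#[cfg(any(unix, windows))]
fn platform_note() -> &'static str {
    "hosted desktop-style target"
}

#[cfg(not(any(unix, windows)))]
fn platform_note() -> &'static str {
    "bare-metal or other target"
}

fn main() {
    // `cfg!` is the expression form of the same dictionary query.
    println!("{} (unix: {})", platform_note(), cfg!(unix));
}
```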

I’m not even going to get into the unreadability of Rust macros – even after having written a few Rust macros myself, I have to admit that I feel like they “just barely work” and probably thar be dragons somewhere in them. This isn’t how you’re supposed to feel in a language that bills itself as reliable. Yes, it is my fault for not being smart enough to parse the language’s syntax, but also, I do have other things to do with my life, like build hardware.

Anyways, this is a superficial complaint. As time passed I eventually got over the learning curve and became more comfortable with it, but it was a hard, steep curve to climb. This is in part because all the Rust documentation is either written in eli5 style (good luck figuring out “feature”s from that example) or is a formal syntax definition (technically, everything you need to know to define a “feature” is in there, but nowhere is it summarized in plain English), with nothing in between.

To be clear, I have a lot of sympathy for how hard it is to write good documentation, so this is not a dig at the people who worked so hard to write so much excellent documentation on the language. I genuinely appreciate the general quality and fecundity of the documentation ecosystem.

Rust just has a steep learning curve in terms of syntax (at least for me).

Rust Is Powerful, but It Is Not Simple

Rust is powerful. I appreciate that it has a standard library which features HashMaps, Vecs, and Threads. These data structures are delicious and addictive. Once we got `std` support in Xous, there was no going back. Coming from a background of C and assembly, Rust’s standard library feels rich and usable — I have read some criticisms that it lacks features, but for my purposes it really hits a sweet spot.
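A small taste of that sweet spot, as a hypothetical sketch of my own using `Vec`, `HashMap`, and `thread` together:

```rust
use std::collections::HashMap;
use std::thread;

// Count word occurrences on a spawned thread: a Vec feeding a HashMap,
// with ownership moved cleanly across the thread boundary. (Illustrative
// only; this is not Xous code.)
fn count_on_thread(words: Vec<String>) -> HashMap<String, u32> {
    let handle = thread::spawn(move || {
        let mut counts = HashMap::new();
        for w in words {
            *counts.entry(w).or_insert(0) += 1;
        }
        counts
    });
    handle.join().expect("counting thread panicked")
}

fn main() {
    let words = vec!["xous".to_string(), "kernel".to_string(), "xous".to_string()];
    let counts = count_on_thread(words);
    assert_eq!(counts["xous"], 2);
}
```

Writing the equivalent in C means hand-rolling (or importing) a hash table, a growable array, and pthread plumbing; here it is a dozen lines of library calls.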

That being said, my addiction to the Rust `std` library has not done any favors in terms of building an auditable code base. One of the criticisms I used to level at Linux was: “holy cow, the kernel source includes things like an implementation of red-black trees; how is anyone going to audit that?”

Now, having written an OS, I have a deep appreciation for how essential these rich, dynamic data structures are. However, the fact that Xous doesn’t include an implementation of HashMap within its repository doesn’t mean that we are any simpler than Linux: indeed, we have just swept a huge pile of code under the rug; just the `collections` portion of the standard library represents about 10k+ SLOC at a very high complexity.

So, while Rust’s `std` library allows the Xous code base to focus on being a kernel and not also be its own standard library, from the standpoint of building a minimum attack-surface, “fully-auditable by one human” codebase, I think our reliance on Rust’s `std` library means we fail on that objective, especially so long as we continue to track the latest release of Rust (and I’ll get into why we have to in the next section).

Ideally, at some point, things “settle down” enough that we can stick a fork in it and call it done by well, forking the Rust repo, and saying “this is our attack surface, and we’re not going to change it”. Even then, the Rust `std` repo dwarfs the Xous repo by several multiples in size, and that’s not counting the complexity of the compiler itself.

Rust Isn’t Finished

This next point dovetails into why Rust is not yet suitable for a fully auditable kernel: the language isn’t finished. For example, while we were coding Xous, a thing called `const generic` was introduced. Before this, Rust had no native ability to deal with arrays bigger than 32 elements! This limitation is a bit maddening, and even today there are shortcomings such as the `Default` trait being unable to initialize arrays larger than 32 elements. This friction led us to put limits on many things at 32 elements: for example, when we pass the results of an SSID scan between processes, the structure only reserves space for up to 32 results, because the friction of going to a larger, more generic structure just isn’t worth it. That’s a language-level limitation directly driving a user-facing feature.
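For comparison, with const generics now stable, a structure like that SSID-scan result can be generic over its capacity. The names and element type below are hypothetical, not the actual Xous definitions:

```rust
// Capacity is a compile-time parameter, so nothing caps out at 32.
#[derive(Debug)]
struct ScanResults<const N: usize> {
    ssids: [Option<u32>; N], // placeholder element type for illustration
    count: usize,
}

impl<const N: usize> ScanResults<N> {
    fn new() -> Self {
        // `[None; N]` works for any N because `Option<u32>` is `Copy`.
        ScanResults { ssids: [None; N], count: 0 }
    }
}

fn main() {
    let scan = ScanResults::<64>::new();
    assert_eq!(scan.ssids.len(), 64);
    assert_eq!(scan.count, 0);
}
```

(The `Default` caveat mentioned above still bites: deriving `Default` here would fail for the 64-element array, since the standard impls stop at 32.)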

Also over the course of writing Xous, things like in-line assembly and workspaces finally reached maturity, which means we need to go back and revisit some unholy things we did to get those critical few lines of initial boot code, written in assembly, integrated into our build system.

I often ask myself “when is the point we’ll get off the Rust release train”, and the answer I think is when they finally make “alloc” no longer a nightly API. At the moment, `no-std` targets have no access to the heap, unless they hop on the “nightly” train, in which case you’re back into the Python-esque nightmare of your code routinely breaking with language releases.

We definitely gave writing an OS in `no-std` + stable a fair shake. The first year of Xous development was all done using `no-std`, at a cost in memory space and complexity. It’s possible to write an OS with nothing but pre-allocated, statically sized data structures, but we had to accommodate the worst-case number of elements in all situations, leading to bloat. Plus, we had to roll a lot of our own core data structures.
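The pre-allocated pattern looks something like this hypothetical sketch (plain Rust here for testability; a real `no-std` version would carry `#![no_std]` and similar constraints):

```rust
// A fixed-capacity stand-in for Vec: storage lives inline, sized for the
// worst case up front, and push fails instead of reallocating.
struct FixedVec<T: Copy, const N: usize> {
    items: [Option<T>; N],
    len: usize,
}

impl<T: Copy, const N: usize> FixedVec<T, N> {
    fn new() -> Self {
        FixedVec { items: [None; N], len: 0 }
    }

    fn push(&mut self, value: T) -> Result<(), T> {
        if self.len == N {
            return Err(value); // hard cap: there is no heap to grow into
        }
        self.items[self.len] = Some(value);
        self.len += 1;
        Ok(())
    }
}

fn main() {
    let mut buf: FixedVec<u8, 4> = FixedVec::new();
    for i in 0..4u8 {
        assert!(buf.push(i).is_ok());
    }
    // The fifth push is rejected -- the bloat-vs-hard-cap trade-off above.
    assert!(buf.push(99).is_err());
}
```

Every such buffer must be sized for the worst case, which is exactly where the memory bloat described above comes from.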

About a year ago, that all changed when Xobs ported Rust’s `std` library to Xous. This means we are able to access the heap in stable Rust, but it comes at a price: now Xous is tied to a particular version of Rust, because each version of Rust has its own unique version of `std` packaged with it. This version tie is for a good reason: `std` is where the sausage gets made of turning fundamentally `unsafe` hardware constructions such as memory allocation and thread creation into “safe” Rust structures. (Also, fun fact I recently learned: Rust doesn’t have a native allocator for most targets – it simply punts to the native libc `malloc()` and `free()` functions!) In other words, Rust is able to make a strong guarantee about the stable release train not breaking old features in part because of all the loose ends swept into `std`.

I have to keep reminding myself that having `std` doesn’t eliminate the risk of severe security bugs in critical code – it merely shuffles a lot of critical code out of sight, into a standard library. Yes, it is maintained by a talented group of dedicated programmers who are smarter than me, but in the end, we are all only human, and we are all fair targets for software supply chain exploits.

Rust has a clockwork release schedule – every six weeks, it pushes a new version. And because our fork of `std` is tied to a particular version of Rust, it means every six weeks, Xobs has the thankless task of updating our fork and building a new `std` release for it (we’re not a first-class platform in Rust, which means we have to maintain our own `std` library). This means we likewise force all Xous developers to run `rustup update` on their toolchains so we can retain compatibility with the language.

This probably isn’t sustainable. Eventually, we need to lock down the code base, but I don’t have a clear exit strategy for this. Maybe the next point at which we can consider going back to `no-std` is when we can get the stable `alloc` feature, which allows us to have access to the heap again. We could then decouple Xous from the Rust release train, but we’d still need to backfill features such as Vec, HashMap, Thread, and Arc/Mutex/Rc/RefCell/Box constructs that enable Xous to be efficiently coded.

Unfortunately, stabilizing `alloc` is very hard, and it has been in development for many years now. That being said, I really appreciate the transparency of Rust behind the development of this feature, and the hard work and thoughtfulness that is being put into stabilizing it.

Rust Has A Limited View of Supply Chain Security

I think this position is summarized well by the installation method recommended by the rustup.rs installation page:

`curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`

“Hi, run this shell script from a random server on your machine.”

To be fair, you can download the script and inspect it before you run it, which is much better than, e.g., the Windows .MSI installers for vscode. However, this practice pervades the entire build ecosystem: a stub of code called `build.rs` is potentially compiled and executed whenever you pull in a new crate from crates.io. This, along with “loose” version pinning (you can specify a version requirement as simply “2”, which means you’ll grab whatever the latest published version with a major rev of 2 happens to be), makes me uneasy about the possibility of software supply chain attacks launched through the crates.io ecosystem.
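To illustrate the difference, here is a hypothetical `Cargo.toml` fragment (the crate names are made up) contrasting a loose version requirement against an exact pin:

```toml
[dependencies]
# Loose: "2" matches any 2.x.y release, so a routine `cargo update`
# can silently pull in brand-new upstream code (and its build.rs).
loose-example = "2"

# Exact: "=2.3.4" pins one specific published version; updates only
# happen when a human edits this line.
pinned-example = "=2.3.4"
```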

Crates.io is also subject to a kind of typosquatting, where it’s hard to determine which crates are “good” or “bad”. Some crates named exactly what you want turn out to be old or abandoned early attempts at the functionality you were after, while the more popular, actively maintained crates have to take on less intuitive names, sometimes differing by just a character or two from others. (To be fair, this is not a problem unique to Rust’s package management system.)

There’s also the fact that dependencies are chained – when you pull in one thing from crates.io, you also pull in all of that crate’s subordinate dependencies, along with all their build.rs scripts that will eventually get run on your machine. Thus, it is not sufficient to simply audit the crates explicitly specified within your Cargo.toml file — you must also audit all of the dependent crates for potential supply chain attacks as well.
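To make that attack surface concrete, here is a minimal, benign sketch of the kind of `build.rs` any crate in the dependency tree can ship. Cargo compiles and runs it on the build host, with your user’s privileges, before the crate itself is built; the directive strings are real Cargo conventions, but the surrounding crate is hypothetical:

```rust
// build.rs -- compiled and executed by Cargo on the build machine.
// A malicious crate could do far more here than print directives: it
// has filesystem, environment, and process access as your user.
use std::env;

// Return the cargo directives this script would emit on stdout.
fn directives() -> Vec<String> {
    vec![
        // Tell Cargo to re-run this script whenever it changes.
        "cargo:rerun-if-changed=build.rs".to_string(),
        // Pass a value from the build environment into the compile.
        format!("cargo:rustc-env=BUILD_HOST_OS={}", env::consts::OS),
    ]
}

fn main() {
    for d in directives() {
        println!("{d}");
    }
}
```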

Fortunately, Rust does allow you to pin a crate at a particular version using the `Cargo.lock` file, and you can fully specify a dependent crate down to the minor revision. We try to mitigate this in Xous by having a policy of publishing our Cargo.lock file and specifying all of our first-order dependent crates to the minor revision. We have also vendored in or forked certain crates that would otherwise grow our dependency tree without much benefit.

That being said, much of our debug and test framework relies on some rather fancy and complicated crates that pull in a huge number of dependencies, and much to my chagrin, even when I try to run a build just for our target hardware, the dependent crates for running simulations on the host computer are still pulled in, and their build.rs scripts are at least built, if not run.

In response to this, I wrote a small tool called `crate-scraper`, which downloads the source package for every source specified in our Cargo.toml file and stores it locally, so we can have a snapshot of the code used to build a Xous release. It also runs a quick “analysis”: it searches for files called build.rs and collates them into a single file, so I can more quickly grep through them looking for obvious problems. Of course, manual review isn’t a practical way to detect cleverly disguised malware embedded within the build.rs files, but it at least gives me a sense of the scale of the attack surface we’re dealing with — and it is breathtaking: about 5,700 lines of code from various third parties that manipulate files, directories, and environment variables, and run other programs on my machine every time I do a build.
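The collation step of a tool like that can be sketched in a few lines of std-only Rust; this is my own illustration of the idea (the function name and directory layout are assumptions, not `crate-scraper`’s actual code):

```rust
// Walk a directory of vendored crate sources and collect the path of
// every build.rs, so they can all be concatenated and grepped in one pass.
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

fn collect_build_scripts(root: &Path, found: &mut Vec<PathBuf>) -> io::Result<()> {
    for entry in fs::read_dir(root)? {
        let path = entry?.path();
        if path.is_dir() {
            // Recurse into each vendored crate's subdirectories.
            collect_build_scripts(&path, found)?;
        } else if path.file_name().map_or(false, |n| n == "build.rs") {
            found.push(path);
        }
    }
    Ok(())
}
```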

I’m not sure if there is even a good solution to this problem, but, if you are super-paranoid and your goal is to be able to build trustable firmware, be wary of Rust’s expansive software supply chain attack surface!

You Can’t Reproduce Someone Else’s Rust Build

A final nit I have about Rust is that builds are not reproducible between different computers (they are at least reproducible between builds on the same machine if we disable the embedded timestamp that I put into Xous for $reasons).

I think this is primarily because Rust pulls in the full path to the source code as part of the panic and debug strings that are built into the binary. This has led to uncomfortable situations where we have had builds that worked on Windows but failed under Linux, because our path names are very different lengths on the two systems, which would cause some memory objects to be shifted around in target memory. To be fair, those failures were all due to bugs we had in Xous, which have since been fixed. But it just doesn’t feel good to know that we’re eventually going to have users who report bugs we can’t reproduce because they have a different path on their build system than ours. It’s also a problem for users who want to audit our releases by building their own version and comparing the hashes against ours.

There are some bugs open with the Rust maintainers to address reproducible builds, but with the number of issues they have to deal with in the language, I am not optimistic that this problem will be resolved anytime soon. Assuming the only driver of the unreproducibility is the inclusion of OS paths in the binary, one fix would be to re-configure our build system to run in some sort of chroot environment or virtual machine that pins the paths to values almost anyone else could reproduce. I say “almost anyone else” because this fix would be OS-dependent, so we’d be able to get reproducible builds under, for example, Linux, but it would not help Windows users, where chroot environments are not a thing.
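Short of a chroot, one mitigation worth noting (assuming embedded paths really are the only source of difference) is rustc’s `--remap-path-prefix` flag, which rewrites the paths baked into panic and debug strings at compile time. It can be applied project-wide through a `.cargo/config.toml` fragment; this is a sketch, not Xous’s actual build configuration:

```toml
# .cargo/config.toml -- a sketch of path remapping for reproducibility.
[build]
rustflags = [
    # Replace the absolute checkout path with a fixed placeholder, so
    # two machines with different checkout locations embed identical
    # strings in the resulting binary.
    "--remap-path-prefix=/home/builder/xous=/xous",
]
```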

Where Rust Exceeded Expectations

Despite all the gripes laid out here, I think if I had to do it all over again, Rust would still be a very strong contender for the language I’d use for Xous. I’ve done major projects in C, Python, and Java, and all of them eventually suffer from “creeping technical debt” (there’s probably a software engineer term for this, I just don’t know it). The problem often starts with some data structure that I couldn’t quite get right on the first pass, because I didn’t yet know how the system would come together; so in order to figure out how the system comes together, I’d cobble together some code using a half-baked data structure.

Thus begins the descent into chaos: once I get an idea of how things work, I go back and revise the data structure, but now something breaks elsewhere that was unsuspected and subtle. Maybe it’s an off-by-one problem, or the polarity of a sign seems reversed. Maybe it’s a slight race condition that’s hard to tease out. Nevermind, I can patch over this by changing a <= to a <, or fixing the sign, or adding a lock: I’m still fleshing out the system and getting an idea of the entire structure. Eventually, these little hacks tend to metastasize into a cancer that reaches into every dependent module because the whole reason things even worked was because of the “cheat”; when I go back to excise the hack, I eventually conclude it’s not worth the effort and so the next best option is to burn the whole thing down and rewrite it…but unfortunately, we’re already behind schedule and over budget so the re-write never happens, and the hack lives on.

Rust is a difficult language for authoring code because it makes these “cheats” hard – as long as you have the discipline of not using “unsafe” constructions to make cheats easy. However, really hard does not mean impossible – there were definitely some cheats that got swept under the rug during the construction of Xous.

This is where Rust really exceeded expectations for me. The language’s structure and tooling was very good at hunting down these cheats and refactoring the code base, thus curing the cancer without killing the patient, so to speak. This is the point at which Rust’s very strict typing and borrow checker converts from a productivity liability into a productivity asset.

I liken it to replacing a cable in a complicated bundle of cables that runs across a building. In Rust, it’s guaranteed that every strand of wire in a cable chase, no matter how complicated and awful the bundle becomes, is separable and clearly labeled on both ends. Thus, you can always “pull on one end” and see where the other ends are by changing the type of an element in a structure, or the return type of a method. In less strictly typed languages, you don’t get this property; the cables are allowed to merge and affect each other somewhere inside the cable chase, so you’re left “buzzing out” each cable with manual tests after making a change. Even then, you’re never quite sure if the thing you replaced is going to lead to the coffee maker switching off when someone turns on the bathroom lights.

Here’s a direct example of Rust’s refactoring abilities in action in the context of Xous. I had a problem in the way trust levels are handled inside our graphics subsystem, which I call the GAM (Graphical Abstraction Manager). Each Canvas in the system gets a `u8` assigned to it that is a trust level. When I started writing the GAM, I just knew that I wanted some notion of trustability of a Canvas, so I added the variable, but wasn’t quite sure exactly how it would be used. Months later, the system grew the notion of Contexts with Layouts, which are multi-Canvas constructions that define a particular type of interaction. Now, you can have multiple trust levels associated with a single Context, but I had forgotten about the trust variable I had previously put in the Canvas structure – and added another trust level number to the Context structure as well. You can see where this is going: everything kind of worked as long as I had simple test cases, but as we started to get modals popping up over applications and then menus on top of modals and so forth, crazy behavior started manifesting, because I had confused myself over where the trust values were being stored. Sometimes I was updating the value in the Context, sometimes I was updating the one in the Canvas. It would manifest itself sometimes as an off-by-one bug, other times as a concurrency error.

This was always a skeleton in the closet that bothered me while the GAM grew into a 5k-line monstrosity of code with many moving parts. Finally, I decided something had to be done about it, and I was really not looking forward to it. I was assuming that I messed up something terribly, and this investigation was going to conclude with a rewrite of the whole module.

Fortunately, Rust left me a tiny string to pull on. Clippy, the cheerfully named “linter” built into Rust, was throwing a warning that the trust level variable was not being used at a point where I thought it should be – I was storing it in the Context after it was created, but nobody ever referred to it after that. That’s strange – it should be necessary for every redraw of the Context! So, I started by removing the variable and seeing what broke. This rapidly led me to recall that I was also storing the trust level inside the Canvases within the Context when they were created, which is why I had this dangling reference. Once I had that clue, I was able to refactor the trust computations to refer to only that one source of ground truth. This also led me to discover other bugs that had been lurking because, in fact, I was never exercising some code paths that I thought I was using on a routine basis. After just a couple hours of poking around, I had a clear-headed view of how this was all working, and I had refactored the trust computation system with tidy APIs that were simpler and easier to understand, without having to toss the entire code base.
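The shape of that “one source of ground truth” refactor can be sketched like this; the names and structure are my invention for illustration, not the actual GAM code. The trust level lives only in the `Canvas`, and the `Context` computes its view of trust on demand instead of caching a second copy:

```rust
// Hypothetical sketch: trust lives in exactly one place.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct TrustLevel(u8);

struct Canvas {
    trust: TrustLevel,
}

struct Context {
    canvases: Vec<Canvas>,
    // No cached trust field here -- that was the dangling duplicate.
}

impl Context {
    // A context is only as trustworthy as its least-trusted canvas,
    // recomputed from ground truth on every query.
    fn trust(&self) -> Option<TrustLevel> {
        self.canvases.iter().map(|c| c.trust).min()
    }
}
```

Because `TrustLevel` is a distinct type rather than a bare `u8`, changing its representation later forces the compiler to flag every read and write site, which is exactly the “pull on one end of the cable” property described above.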

This is just one of many positive experiences I’ve had with Rust in maintaining the Xous code base. It’s one of the first times I’ve walked into a big release with my head up and a positive attitude, because for the first time ever, I feel like maybe I have a chance of being able to deal with hard bugs in an honest fashion. I’m spending less time making excuses in my head to justify why things were done this way and why we can’t take that pull request, and more time thinking about all the ways things can get better, because I know Clippy has my back.

Caveat Coder

Anyways, that’s a lot of ranting about software for a hardware guy. Software people are quick to remind me that first and foremost, I make circuits and aluminum cases, not code, and therefore I have no place ranting about software. They’re right – I actually have no “formal” training to write code “the right way”. When I was in college, I learned Maxwell’s equations, not algorithms. I could never be a professional programmer, because I couldn’t pass even the simplest coding interview. Don’t ask me to write a linked list: I already know that I don’t know how to do it correctly; you don’t need to prove that to me. This is because whenever I find myself writing a linked list (or any other foundational data structure, for that matter), I immediately stop myself and question all the life choices that brought me to that point: isn’t this what libraries are for? Do I really need to be re-inventing the wheel? If there is any correlation between doing well in a coding interview and actual coding ability, then you should definitely take my opinions with a grain of salt.

Still, after spending a couple years in the foxhole with Rust and reading countless glowing articles about the language, I felt like maybe a post that shared some critical perspectives about the language would be a refreshing change of pace.

executeness:chokolattejedi:irrelevantlyvalid:nickyandmikey:nickyandmikey:when tw...
when two musicians sing into the same microphone and lean in very close to each other… like omg are you guys gonna kiss now to relieve the homoerotic tension?😳


op is the only valid person i’ve ever met. everyone else needs to come to the light

Okay, but this is really important: Bruce Springsteen occupied this really weird place in music history. His songs were all from this pessimistic, nihilistic view of an America that had let him down:

Just like the anti-Vietnam War protest songs that we associate with the 1960s, or the early nihilism that spawned punk music in the 1970s. But he didn’t *sound* like a punk anarchist; he sounded like a country rock singer. When he released Born in the U.S.A. people completely misinterpreted (or possibly ignored) the lyrics in favor of the tone of the music.

Politicians used his music to promote their ‘Murica Yes! brand, and he had to literally explain that that was not what he was about. He’s over here asking when we’re going to have jobs and healthcare, not stanning the politicians who weren’t helping the people.

It was also kind of a big deal that he had an integrated band, because even as late as the 1980s, music was still kind of segregated and MTV was straight up racist. They refused to play and promote black artists and then claimed there were no black artists in the first place. Michael Jackson’s record company had to threaten a boycott of their white artists to get MTV to play his Thriller video.

Plus, the first black/white interracial kiss on TV was in 1968 (OG Star Trek). Also it took us until the 70s to get sympathetic gay characters on screen, and the 90s to get gay characters to kiss onscreen. And all of those firsts were met with outrage.

So keep that in mind when you see Bruce Springsteen not just playing with an interracial band, but engaging in an interracial, gay kiss on stage repeatedly.

Passages from American Popular Music by Larry Starr and Christopher Waterman

I used to think that Bruce and Clarence kissing onstage was exuberance, showmanship, and telling racist homophobes to fuck off. Like, they picked up a certain kind of audience and went “Racist homophobes? Not in our house!” And started the kissing then but then I actually looked it up and


It was a story where… we remade the city. We remade the city, shaping it into the kind of place where our friendship and our love for one another wouldn’t have been such an exceptional thing. - Bruce Springsteen

It wasn’t about showmanship or rejecting bigots or anything it was just. Damn right that was one of the loves of his life and damn right he was going to kiss him onstage

It gets me a little that Bruce has had a divorce, that he’s been married twice, but he loved Clarence for the rest of Clarence’s life and will presumably love him the rest of his own

Clemons said in one interview: “Bruce and I looked at each other and didn’t say anything, we just knew. We knew we were the missing links in each other’s lives. He was what I’d been searching for.” In another version of the story, Clemons says, “He looked at me, and I looked at him, and we fell in love.”

I’m having some emotions about it!

“He was elemental in my life,“ Springsteen adds, “and losing him was like losing the rain.”

Not just! I love you pure and deep and true but! I am going to love you like that in front of the whole damn world!

We have fewer narratives about taking risks and making statements for platonic love rather than romantic and supposedly it would be easier to downplay this onstage than romance and! They refused! They fucking refused! In front of hundreds of thousands of people, over the course of years! In the spotlight, in word and deed, I love you!

God I’m not okay about it
