Reflections on Founder Mode The Observation Deck

Paul Graham’s Founder Mode is an important piece, and you should read it if for no other reason than that “founder mode” will surely enter the lexicon (and as Graham grimly predicts: “as soon as the concept of founder mode becomes established, people will start misusing it”). When building a company, founders are engaged in several different acts at once: raising capital; building a product; connecting that product to a market; building an organization to do all of these. Founders make lots of mistakes in all of these activities, and Graham’s essay highlights a particular kind of mistake in which founders are overly deferential to expertise or convention. Pejoratively referring to this as “Management Mode”, Graham frames this in the Silicon Valley dramaturgical dyad of Steve Jobs and John Sculley. While that’s a little too reductive (anyone seeking to understand Jobs needs to read Randall Stross’s superlative Steve Jobs and the NeXT Big Thing, highlighting Jobs’s many post-Sculley failures at NeXT), Graham has identified a real issue here, albeit without much specificity.

For a treatment of the same themes but with much more supporting detail, one should read the (decade-old) piece from Tim O’Reilly, How I failed. (Speaking personally, O’Reilly’s piece had a profound influence on me, as it encouraged me to stand my ground on an issue on which I had my own beliefs but was being told to defer to convention.) But as terrific as it is, O’Reilly’s piece also doesn’t answer the question that Graham poses: how do founders prevent their companies from losing their way?

Graham says that founder mode is a complete mystery (“There are as far as I know no books specifically about founder mode”), and while there is a danger in being too pat or prescriptive, there does seem to be a clear component for keeping companies true to themselves: the written word. That is, a writing- (and reading-!) intensive company culture does, in fact, allow for scaling the kind of responsibility that Graham thinks of as founder mode. At Oxide, our writing-intensive culture has been absolutely essential: our RFD process is the backbone of Oxide, and has given us the structure to formalize, share, and refine our thinking. First among this formalized thinking – and captured in our first real RFD – is RFD 2 Mission, Principles, and Values. Immediately behind that (and frankly, the most important process for any company) is RFD 3 Oxide Hiring Process. These first three RFDs – on the process itself, on what we value, and on how we hire – were written in the earliest days of the company, and they have proven essential to scale the company: they are the foundation upon which we attract people who share our values.

While the shared values have proven necessary, they haven’t been sufficient to eliminate the kind of quandaries that Graham and O’Reilly describe. For example, there have been some who have told us that we can’t possibly hire non-engineering roles using our hiring process – or told us that our approach to compensation can’t possibly work. To the degree that we have had a need for Graham’s founder mode, it has been in those moments: to stay true to the course we have set for the company. But because we have written down so much, there is less occasion for this than one might think. And when it does occur – when there is a need for further elucidation or clarification – the artifact is not infrequently a new RFD that formalizes our newly extended thinking. (RFD 68 is an early public and concrete example of this; RFD 508 is a much more recent one that garnered some attention.)

Most importantly, because we have used our values as a clear lens for hiring, we are able to assure that everyone at Oxide is able to have the same disposition with respect to responsibility – and this (coupled with the transparency that the written word allows) permits us to trust one another. As I elucidated in Things I Learned The Hard Way, the most important quality in a leader is to bind a team with mutual trust: with it, all things are possible – and without it, even easy things can be debilitatingly difficult. Graham mentions trust, but he doesn’t give it its due. Too often, founders focus on the immediacy of a current challenge without realizing that they are, in fact, undermining trust with their approach. Bluntly, founders are at grave risk of misinterpreting Graham’s “founder mode” to be a license to micromanage their teams, descending into the kind of manic seagull management that inhibits a team rather than empowering it.

Founders seeking to internalize Graham’s advice should recast it by asking themselves how they can foster mutual trust – and how they can build the systems that allow trust to be strengthened even as the team expands. For us at Oxide, writing is the foundation upon which we build that trust. Others may land on different mechanisms, but the goal of founders should be the same: build the trust that allows a team to kick a Jobsian dent in the universe!


KORH Minimum Sector Altitude Gotcha Josef "Jeff" Sipek

I had this draft around for over 5 years—since January 2019. Since I still think the observation is an interesting one, I’m publishing it now.

In late December (2018), I was preparing for my next instrument rating lesson which was going to involve a couple of ILS approaches at Worcester, MA (KORH). While looking over the ILS approach to runway 29, I noticed something about the minimum sector altitude that surprised me.

Normally, I consider MSAs to be centered near the airport for the approach. For conventional (i.e., non-RNAV) approaches, this tends to be the main navaid used during the approach. At Worcester, however, the 25 nautical mile MSA is centered on the Gardner VOR, which is 19 nm away from the airport.

I plotted the MSA boundary on the approach chart to visualize it better:

It is easy to glance at the chart, see 3300 most of the way around, and not realize that when flying in the vicinity of the airport we are near the edge of the MSA. GRIPE, the missed approach hold fix, is half a mile outside of the MSA. (Following the missed approach procedure provides plenty of margin, of course, so this isn’t really that relevant.)

What's a decent password length? The Trouble with Tribbles...

What's a decent length for a password?

I think it's pretty much agreed by now that longer passwords are, in general, better. And fortunately stupid complexity requirements are on the way out.

Reading the NIST password rules gives the following:

  • User-chosen passwords must be at least 8 characters
  • Machine-generated passwords must be at least 6 characters
  • You must allow passwords of up to at least 64 characters

Say what? A 6 character password is secure?

Initially, that seems way off, but it depends on your threat model. If you have a mechanism to block the really bad commonly used passwords, then 6 characters gives you a billion choices. Not many, but you should also be implementing technical measures such as rate limiting.

With that, if the only attack vector is brute force over the network, trying a billion passwords is simply impractical. Even with just passive rate limiting (limited by CPU power and network latency) an attacker will struggle; with active limiting they'll be trying for decades.
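A rough back-of-the-envelope sketch of the online attack (the 62-character alphabet and the 10-guesses-per-second throttle are illustrative assumptions, not figures from NIST):

```python
def brute_force_years(length, charset=62, guesses_per_sec=10):
    """Worst-case time to exhaust the whole keyspace, in years."""
    keyspace = charset ** length
    return keyspace / guesses_per_sec / (365 * 24 * 3600)

# 6 random characters from [A-Za-z0-9], attacker throttled to 10 guesses/s
print(f"{brute_force_years(6):.0f} years")  # about 180 years at this rate
print(f"{brute_force_years(8):.0f} years")  # hundreds of thousands of years
```

Even these deliberately pessimistic numbers (a full 62-character alphabet is generous; real rate limits are often stricter) show why 6 random characters holds up against network-only attackers.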

That's with just 6 random characters. Go to 8 and you're out of sight. And for this attack vector, no quantum computing developments will make any difference whatsoever.

But what if the user database itself is compromised?

Of course, if the passwords are in cleartext then no amount of fancy rules or length requirements is going to help you at all.

But if an attacker gets the hashed passwords then they can brute force them offline, many orders of magnitude faster; or, if the hashes are unsalted, use rainbow tables. And that's a whole different threat model.

Realistically, protecting against brute force or rainbow table attacks probably needs a 16 character password (or passphrase), and that requirement could get longer over time.

A corollary to this is that there isn't actually much to be gained by requiring password lengths between 8 and 16 characters.

In illumos, the default minimum password length is 6 characters. I recently increased the default in Tribblix to 8, which aligns with the minimum that NIST gives for user-chosen passwords.

OmniOS Community Edition r151050 OmniOS Community Edition

OmniOSce v11 r151050 is out!

On the 6th of May 2024, the OmniOSce Association released a new stable version of OmniOS - The Open Source Enterprise Server OS. The release comes with many tool updates, brand-new features and additional hardware support. For details see the release notes.

Note that the previous LTS release, r151038, is now end-of-life. You should upgrade to r151046 or r151050 to stay on a supported track: r151046 is an LTS release with support until May 2026, and r151050 is a stable release with support until May 2025.

OmniOS is fully Open Source and free. Nevertheless, it takes a lot of time and money to keep maintaining a full-blown operating system distribution. Our statistics show that there are almost 2,000 active installations of OmniOS while fewer than 20 people send regular contributions. If your organisation uses OmniOS based servers, please consider becoming a regular patron or taking out a support contract.


Any problems or questions, please get in touch.

Unsynchronized PPS Experiment Josef "Jeff" Sipek

Late last summer I decided to do a simple experiment—feed my server a PPS signal that wasn’t synchronized to any timescale. The idea was to give chrony a reference that is more stable than the crystal oscillator on the motherboard.

Hardware

For this PPS experiment I decided to avoid all control loop/feedback complexity and just manually set the frequency to something close enough and let it drift—hence the unsynchronized. As a result, the circuit was quite simple:

The OCXO was a $5 used part from eBay. It outputs a 10 MHz square wave and has a control voltage pin that lets you tweak the frequency a little bit. By playing with it, I determined that a 10 mV control voltage change yielded about 0.1 Hz frequency change. The trimmer sets this reference voltage. To “calibrate” it, I connected it to a frequency counter and tweaked the trimmer until the counter read exactly 10 MHz.

10 MHz is obviously way too fast for a PPS signal. The simplest way to turn it into a PPS signal is to use an 8-bit microcontroller. The ATmega48P’s design seems to have very deterministic timing (in other words it adds a negligible amount of jitter), so I used it at 10 MHz (fed directly from the OCXO) with a very simple assembly program to toggle an output pin on and off. The program kept an output pin high for exactly 2 million cycles, and low for 8 million cycles thereby creating a 20% duty cycle square wave at 1 Hz…perfect to use as a PPS. Since the jitter added by the microcontroller is measured in picoseconds it didn’t affect the overall performance in any meaningful way.
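The divider arithmetic is easy to check (all the numbers below are taken from the description above):

```python
F_OSC = 10_000_000   # OCXO output frequency, Hz
HIGH = 2_000_000     # cycles with the output pin held high
LOW = 8_000_000      # cycles with the output pin held low

period = (HIGH + LOW) / F_OSC   # 10 million cycles at 10 MHz -> 1.0 s
duty = HIGH / (HIGH + LOW)      # 2M of 10M cycles high -> 20% duty cycle

# The trimmer's observed sensitivity: 0.1 Hz per 10 mV at 10 MHz,
# i.e. each 10 mV step moves the frequency by about 10 parts per billion.
ppb_per_10mV = 0.1 / F_OSC * 1e9

print(period, duty, ppb_per_10mV)
```

That 10 ppb-per-step tuning granularity is worth keeping in mind when looking at the frequency error measured later in the post.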

The ATmega48P likes to run at 5V and therefore its PPS output is +5V/0V, which isn’t compatible with a PC serial port. I happened to have an ADM3202 on hand so I used it to convert the 5V signal to an RS-232 compatible signal. I didn’t do as thorough of a check of its jitter characteristics, but I didn’t notice anything bad while testing the circuit before “deploying” it.

Finally, I connected the RS-232 compatible signal to the DCD pin (but CTS would have worked too).

The whole circuit was constructed on a breadboard with the OCXO floating in the air on its wires. Power was supplied with an iPhone 5V USB power supply. Overall, it was a very quick and dirty construction to see how well it would work.

Software

My server runs FreeBSD with chrony as the NTP daemon. The configuration is really simple.

First, setting dev.uart.0.pps_mode to 2 informs the kernel that the PPS signal is on DCD (see uart(4)).

Second, we need to tell chrony that there is a local PPS on the port:


refclock PPS /dev/cuau0 local 

The local token is important. It tells chrony that the PPS is not synchronized to UTC. In other words, that the PPS can be used as a 1 Hz frequency source but not as a phase source.
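Putting the two pieces together, the host-side setup amounts to just two lines. (The sysctl can be made persistent via /etc/sysctl.conf; the location of chrony.conf depends on how chrony was installed, so treat the paths here as a sketch.)

```
# /etc/sysctl.conf: route the PPS pulses on DCD to the kernel (see uart(4))
dev.uart.0.pps_mode=2

# chrony.conf: use the pulses as a 1 Hz frequency reference only ("local")
refclock PPS /dev/cuau0 local
```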

Performance

I ran my server with this PPS refclock for about 50 days with chrony configured to log the time offset of each pulse and to apply filtering to every 16 pulses. (This removes some of the errors related to serial port interrupt handling not being instantaneous.) The following evaluation uses only these filtered samples as well as the logged data about the calculated system time error.

In addition to the PPS, chrony used several NTP servers from the internet (including the surprisingly good time.cloudflare.com) for the date and time-of-day information. This is a somewhat unfortunate situation when it comes to trying to figure out how good of an oscillator the OCXO is, as to make good conclusions about one oscillator one needs a better quality oscillator for the comparison. However, there are still a few things one can look at even when the (likely) best oscillator is the one being tested.

NTP Time Offset

The ultimate goal of a PPS source is to stabilize the system’s clock. Did the PPS source help? I think it is easy to answer that question by looking at the remaining time offset (column 11 in chrony’s tracking.log) over time.

This is a plot of 125 days that include the 50 days when I had the PPS circuit running. You can probably guess which 50 days. (The x-axis is time expressed as Modified Julian Date, or MJD for short.)

I don’t really have anything to say aside from—wow, what a difference!

For completeness, here’s a plot of the estimated local offset at the epoch (column 7 in tracking.log). My understanding of the difference between the two columns is fuzzy but regardless of which I go by, the improvement was significant.

Fitting a Polynomial Model

In addition to looking at the whole-system performance, I wanted to look at the PPS performance itself.

As before, the x-axis is MJD. The y-axis is the PPS offset as measured and logged by chrony—the 16-second filtered values.

The offset started at -486.5168ms. This is an arbitrary offset that simply shows that I started the PPS circuit about half a second off of UTC. Over the approximately 50 days, the offset grew to -584.7671ms.

This means that the OCXO frequency wasn’t exactly 10 MHz (and therefore the 1 PPS wasn’t actually at 1 Hz). Since there is a visible curve to the line, it isn’t a simple fixed frequency error but rather the frequency drifted during the experiment.

How much? I used R’s lm function to fit simple polynomials to the collected data. I tried a few different polynomial degrees, but all of them were fitted the same way:


# Fit a polynomial of degree poly_degree to the measured PPS offsets;
# raw=TRUE gives ordinary (non-orthogonal) polynomial coefficients.
m <- lm(pps_offset ~ poly(time, poly_degree, raw=TRUE))
a <- as.numeric(m$coefficients[1])  # time offset (s)
b <- as.numeric(m$coefficients[2])  # frequency error (s/s)
c <- as.numeric(m$coefficients[3])  # drift (s/s^2); absent for degree < 2
d <- as.numeric(m$coefficients[4])  # (s/s^3); absent for degree < 3

In all cases, these coefficients correspond to the 4 terms in a + bt + ct² + dt³. For lower-degree polynomials, the missing coefficients are 0.

Note: Even though the plots show the x-axis in MJD, the calculations were done in seconds with the first data point at t=0 seconds.

Linear

The simplest model is a linear one. In other words, fitting a straight line through the data set. lm provided the following coefficients:

a=-0.480090626569894
b=-2.25787872135774e-08

That is an offset of -480.09ms and slope of -22.58ns/s (which is also -22.58 ppb frequency error).
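As a quick sanity check on the units (the slope is the b coefficient from the linear fit above):

```python
F_NOM = 10e6  # nominal OCXO frequency, Hz

b = -2.25787872135774e-08  # fitted slope: seconds of offset gained per second

ppb = b * 1e9              # a drift of 1 ns/s is exactly 1 ppb
freq_err_hz = b * F_NOM    # how far off 10 MHz the OCXO actually ran

print(f"{ppb:.2f} ppb -> {freq_err_hz:.4f} Hz error at 10 MHz")
```

A ~0.23 Hz error is only a couple of trimmer steps away from the bench calibration, consistent with the "tuned it, unplugged it, moved it" story later in the post.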

Graphically, this is what the line looks like when overlayed on the measured data:

Not bad but also not great. Here is the difference between the two:

Put another way, this is the PPS offset from UTC if we correct for time offset (a) and a frequency error (b). The linear model clearly doesn’t handle the structure in the data completely. The residual is near low-single-digit milliseconds. We can do better, so let’s try to add another term.

Quadratic

lm produced these coefficients for a degree 2 polynomial:

a=-0.484064700277606
b=-1.75349684277379e-08
c=-1.10412099841665e-15

Visually, this fits the data much better. It’s a little wrong on the ends, but overall quite nice. Even the residual (below) is smaller—almost completely confined to less than 1 millisecond.

a is still time offset, b is still frequency error, and c is a time “acceleration” of sorts.

There is still very visible structure to the residual, so let’s add yet another term.

Cubic

As before, lm yielded the coefficients. This time they were:

a=-0.485357232306569
b=-1.44068934233748e-08
c=-2.78676248986831e-15
d=2.45563844387287e-22

That’s really close looking!

The residual still has a little bit of a wave to it, but almost all the data points are within 500 microseconds. I think that’s sufficiently close given just how much non-deterministic “stuff” (both hardware and software) there is between a serial port and an OS kernel’s interrupt handler on a modern server. (In theory, we could add additional terms forever until we completely eliminated the residual.)

So, we have a model of what happened to the PPS offset over time. Specifically, a + bt + ct² + dt³ and the 4 constants. The offset (a of approximately -485ms) is easily explained—I started the PPS at the “wrong” time. The frequency error (b of approximately -14.4 ppb) can be explained as I didn’t tune the oscillator to exactly 10 MHz. (More accurately, I tuned it, unplugged it, moved it to my server, and plugged it back in. The slightly different environment could produce a few ppb error.)

What about the c and d terms? They account for a combination of a lot of things. Temperature is a big one. First of all, it is a home server and so it is subject to air-conditioner cycling on and off at a fairly long interval. This produces sizable swings in temperature, which in turn mess with the frequency. A server in a data center sees much less temperature variation, since the chillers keep the temperature essentially constant (at least compared to homes). Second, the oscillator was behind the server and I expect the temperature to slightly vary based on load.

One could no doubt do more analysis (and maybe at some point I will), but this post is already getting way too long.

Conclusion

One can go nuts trying to play with time and time synchronization. This is my first attempt at timekeeping-related circuitry, so I’m sure there are ways to improve the circuit or the analysis.

I think this experiment was a success. The system clock behavior improved beyond what’s needed for a general purpose server. Getting under 20 ppb error from a simple circuit on a breadboard with absolutely no control loop is great. I am, of course, already tinkering with various ideas that should improve the performance.

Tribblix image structural changes The Trouble with Tribbles...

The Tribblix live ISO and related images are put together ever so slightly differently in the latest m34 release.

All along, there's been an overlay (think: a group of packages) called base-iso that lists the packages that are present in the live image. On installation, this is augmented with a few extra packages that you would expect to be present in a running system but which don't make much sense in a live image, to construct the base system.

You can add additional software, but the base is assumed to be present.

The snag with this is that base-iso is trying to be a single, generic, image definition. By its very nature it has to be minimal enough to not be overly bloated, yet contain as many drivers as necessary to handle the majority of systems.

As such, the regular ISO image has fallen between 2 stools - it doesn't have every single driver, so some systems won't work, while it has a lot of unnecessary drivers for a lot of common use cases.

So what I've done is split base-iso into 2 layers. There's a new core-tribblix overlay, which is the common packages, and then base-iso adds all the extra drivers. By and large, the regular live image for m34 isn't really any different to what was present before.

But the concepts of "what packages do I need for applications to work" and "what packages do I want to load on a given downloadable ISO" have now been split.

What this allows is to easily create other images with different rules. As of m34, for example, the "minimal" image is actually created from a new base-server overlay, which again sits atop core-tribblix and differs from base-iso in that it has all the FC drivers. If you're installing on a fibre-channel connected system then using the minimal image will work better (and if you're SAN-booted, it will work where the regular ISO won't).

The next use case is that images for cloud or virtual systems simply don't need most of the drivers. This cuts out a lot of packages (although it doesn't actually save that much space).

The standard Tribblix base system now depends on core-tribblix, not base-iso or any of the specific image layers. This is as it should be - userland and applications really shouldn't care what drivers are present.

One side-effect of this change is that it makes minimising zones easier, because what gets installed in a zone can be based on that stripped-down core-tribblix overlay.

Engineering a culture Oxide Computer Company Blog

We ran into an interesting issue recently. On the one hand, it was routine: we had a bug — a regression — and the team quickly jumped on it, getting it root caused and fixed. But on the other, this particular issue was something of an Oxide object lesson, representative not just of the technologies but also of the culture we have built here. I wasn’t the only person who thought so, and two of my colleagues wrote terrific blog entries with their perspectives:

The initial work as described by Matt represents a creative solution to a thorny problem; if it’s clear in hindsight, it certainly wasn’t at the time! (In Matt’s evocative words: "One morning, I had a revelation.") I first learned of Matt’s work when he demonstrated it during our weekly Demo Friday, an hour-long unstructured session to demo our work for one another. Demo Friday is such an essential part of Oxide’s culture that it feels like we have always done it, but in fact it took us nearly two years into the company’s life to get there: over the spring and summer of 2021, our colleague Sean Klein had instituted regular demos for the area that he works on (the Oxide control plane), and others around the company — seeing the energy that came from it — asked if they, too, could start regular demos for their domain. But instead of doing it group by group, we instituted it company-wide starting in the fall of 2021: an unstructured hour once a week in which anyone can demo anything.

In the years since, we have had demos of all scopes and sizes. Importantly, no demo is too small — and we have often found that a demo that feels small to someone in the thick of work will feel extraordinary to someone outside of it. ("I have a small demo building on the work of a lot of other people" has been heard so frequently that it has become something of an inside joke.) Demo Friday is important because it gets to one of our most important drivers as technologists: the esteem of our peers. The thrill that you get from showing work to your colleagues is unparalleled — and their wonderment in return is uniquely inspiring. (Speaking personally, Matt’s demo addressed a problem that I had personally had many times over in working on Hubris — and I was one of the many w00ts in the chat, excited to see his creative solution!)

Having the demos be company-wide has also been a huge win for not just our shared empathy and teamwork but also our curiosity and versatility: it’s really inspiring to have (say) one colleague show how they used PCB backdrilling for signal integrity, and the next show an integration they built using Dropshot between our CRM and spinning up a demonstration environment for a customer. And this is more than just idle intellectual curiosity: our stack is deep — spanning both hardware and software — and the demos make for a fun and engaging way to learn about aspects of the system that we don’t normally work on.

Returning to Matt and Cliff, if Matt’s work implicitly hits on aspects of our culture, Cliff’s story of debugging addresses that culture explicitly, noting that the experience demonstrated:

Tight nonhierarchical integration of the team. This isn’t a Hubris feature, but it’s hard to separate Hubris from the team that built it. Oxide’s engineering team has essentially no internal silos. Our culture rewards openness, curiosity, and communication, and discourages defensiveness, empire-building, and gatekeeping. We’ve worked hard to create and defend this culture, and I think it shows in the way we organized horizontally, across the borders of what other organizations would call teams, to solve this mystery.

In the discussion on Hacker News of Cliff’s piece, this cultural observation stood out, with a commenter asking:

I’d love to hear more about the motivations for crafting such a culture as well as some particular implementation details. I’m curious if there are drawbacks to fostering "openness, curiosity, and communication" within an organization?

The culture at Oxide is in fact very deliberate: when starting a company, one is building many things at once (the team, the product, the organization, the brand) — and the culture will both inform and be reinforced by all of these. Setting that first cultural cornerstone was very important to us — starting with our mission, principles, and values. Critically, by using our mission, principles, and values as the foundation for our hiring process, we have deliberately created a culture that reinforces itself.

Some of the implementation details:

  • We have uniform compensation (even if it might not scale indefinitely)

  • We are writing intensive (but we still believe in spoken collaboration)

  • We have no formalized performance review process (but we believe in feedback)

  • We record every meeting (but not every conversation)

  • We have a remote work force (but we also have an office)

  • We are non-hierarchical (but we all ultimately report to our CEO)

  • We don’t use engineering metrics (but we all measure ourselves by our customers and their success)

If it needs to be said, there is plenty of ambiguity: if you are using absolutes to think of Oxide (outside of our principles of honesty, integrity and decency!) you are probably missing some nuance of our culture.

Finally, to the (seemingly loaded?) question of the "drawbacks" of fostering "openness, curiosity, and communication" within an organization, the only drawback is that it’s hard work: culture has to be deliberate without being overly prescriptive, and that can be a tricky balance. In this regard, building a culture is very different than building (say) software: it is not engineered in a traditional sense, but is rather a gooey, squishy, organism that will evolve over time. But the reward of the effort is something that its participants care intensely about: it will continue to be (in Cliff’s words) a culture that we work hard to not just create but defend!

OmniOS is not affected by CVE-2024-3094 OmniOS Community Edition

Yesterday we learned of a supply chain back door in the xz-utils software via an announcement at https://www.openwall.com/lists/oss-security/2024/03/29/4. The vulnerability was distributed with versions 5.6.0 and 5.6.1 of xz, and has been assigned CVE-2024-3094.

OmniOS is NOT affected by CVE-2024-3094

The malicious code is only present in binary artefacts if the build system is Linux (and there are some additional constraints too) and if the system linker is GNU ld – neither of which are true for our packages. The payload is also a Linux ELF binary which would not successfully link into code built for OmniOS, and requires features which are only present in the GNU libc.

We have also only ever shipped xz-utils 5.6.x as part of the unstable bloody testing release; stable releases contain older versions:

  • r151038 ships version 5.2.6
  • r151046 ships version 5.4.2
  • r151048 ships version 5.4.4
  • bloody ships version 5.6.1
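
To check which version a given system actually has, standard commands (not part of the announcement) suffice:

```shell
# Report the xz-utils version in use; the affected releases are 5.6.0 and 5.6.1.
# (On OmniOS, `pkg list -v xz` shows the exact package version installed.)
xz --version
```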

Despite being unaffected, we have now switched builds of xz in bloody to using the raw source archive, which does not contain the malicious injection code, and generating the autoconf files ourselves. We have not downgraded to an earlier version as it is not clear which earlier version can be considered completely safe given that the perpetrator has been responsible for maintaining and signing releases back to version 5.4.3. Once a cleaned 5.6.2 release is available, we will upgrade to that.


Any problems or questions, please get in touch.

Disabling Monospaced Font Ligatures Josef "Jeff" Sipek

A recent upgrade of FreeBSD on my desktop resulted in just about every program (Firefox, KiCAD, but thankfully not urxvt) rendering various ligatures even for monospaced fonts. Needless to say, this is really annoying when looking at code, etc. Not having any better ideas, I asked on Mastodon if anyone knew how to turn this mis-feature off.

About an hour later, @monwarez@bsd.cafe suggested dropping the following XML in /usr/local/etc/fonts/conf.avail/29-local-noto-mono-fixup.conf and adding a symlink in ../conf.d to enable it:

<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "urn:fontconfig:fonts.dtd">
<fontconfig>
        <description>Disable ligatures for monospaced fonts to avoid ff, fi, ffi, etc. becoming only one character wide</description>
        <match target="font">
                <test name="family" compare="eq">
                        <string>Noto Sans Mono</string>
                </test>
                <edit name="fontfeatures" mode="append">
                        <string>liga off</string>
                        <string>dlig off</string>
                </edit>
        </match>
</fontconfig>
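
Enabling the fixup is then just the symlink step mentioned above (paths as given in the post; run as root, then rebuild the font caches):

```shell
cd /usr/local/etc/fonts/conf.d
ln -s ../conf.avail/29-local-noto-mono-fixup.conf .
fc-cache -f   # rebuild fontconfig caches; running apps pick it up on restart
```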

This solved my problem. Hopefully this will help others. If not, it’s a note-to-self for when I need to reapply this fixup :)

Moore's Scofflaws Oxide Computer Company Blog

Years ago, Jeff Bezos famously quipped that "your margin is my opportunity." This was of course aimed not at Amazon’s customers, but rather its competitors, and it was deadly serious: customers of AWS in those bygone years will fondly remember that every re:Invent brought with it another round of price cuts. This era did not merely reflect Bezos’s relentless execution, but also a disposition towards who should reap the reward of advances in underlying technology: Amazon believed (if implicitly) that improvements at the foundations of computing (e.g., in transistor density, core count, DRAM density, storage density, etc.) should reflect themselves in lower prices for consumers rather than higher margins for suppliers.

Price cuts are no longer a re:Invent staple, having been replaced by a regular Amazon tradition of a different flavor: cutting depreciation (and therefore increasing earnings) by extending the effective life of their servers. (These announcements are understandably much more subdued, as "my depreciation is my margin opportunity" doesn’t have quite the same ring to it.)

As compute needs have grown and price cuts have become an increasingly distant memory, some have questioned their sky-high cloud bills, wondering if they should in fact be owning their compute instead of renting it. When we started Oxide, we knew from operating our own public cloud what those economics looked like — and we knew that over time others of a particular scale would come to the same realization that they would be better off not giving their margin away by renting compute. (Though it’s safe to say that we did not predict that it would be DHH leading the charge!)

Owning one’s own cloud sounds great, but there is a bit that’s unsaid: what about the software? Software is essential for elastic, automated infrastructure: hardware alone does not a cloud make! Unfortunately, the traditional server vendors do not help here: because of a PC-era divide in how systems are delivered, customers are told to look elsewhere for any and all system software. This divide is problematic on several levels. First, it impedes the hardware/software co-design that we (and, famously, others!) believe is essential to deliver the best possible product. Second, it leads to infamous finger pointing when the whole thing doesn’t work. But there is also a thorny economic problem: when your hardware and your software don’t come from the same provider, to whom should go the spoils of better hardware?

To someone who has just decided to buy their hardware out of their frustration with renting it, the answer feels obvious: whoever owns the hardware should naturally benefit from its advances! Unfortunately, the enterprise software vendor delivering your infrastructure often has other ideas — and because their software is neither rented nor bought, but rather comes from the hinterlands of software licensing, they have broad latitude as to how it is priced and used. In particular, this allows them to charge based on the hardware that you run it on — to have per-core software licensing.

This galling practice isn’t new (and is in fact as old as symmetric multiprocessing systems), but it has taken on new dimensions in the era of chiplets and packaging innovation: the advances that your next CPU has over your current one are very likely to be expressed in core count. Per-core licensing allows a third party — who neither made the significant investment in developing the next generation of microprocessor nor paid for the part themselves — to exact a tax on improved infrastructure. (And this tax can be shockingly brazen!) Couple this with the elimination of perpetual licensing, and software costs can potentially absorb the entire gain from a next-generation CPU, leaving a disincentive to run newer, more efficient infrastructure. As an industry, we have come to accept this practice, but we shouldn’t: in the go-go era of Dennard scaling (when clock rates rose at a blistering rate), software vendors never would have been allowed to get away with charging by the gigahertz; we should not allow them to feel so emboldened to charge by core count now!
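
To make the arithmetic concrete (with entirely hypothetical numbers; real prices and fees vary), here is a sketch of how per-core licensing can swallow most of the hardware gain from a doubled core count:

```python
# Hypothetical figures for illustration only: a next-generation server with
# twice the cores at the same price halves the hardware cost per core, but a
# per-core license fee claws most of that gain back.
SERVER_COST = 12_000        # hypothetical server price, both generations
LICENSE_PER_CORE = 200      # hypothetical annual per-core license fee
DEPRECIATION_YEARS = 3

def annual_cost_per_core(cores: int) -> float:
    hardware = SERVER_COST / DEPRECIATION_YEARS / cores
    return hardware + LICENSE_PER_CORE

for cores in (32, 64):
    print(f"{cores} cores: ${annual_cost_per_core(cores):.2f}/core/year")
```

In this sketch, the hardware cost per core halves (from $125 to $62.50), but the all-in cost per core falls only about 19%: the license fee, not the owner of the hardware, captures most of the improvement.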

If it needs to be said, we have taken a different approach at Oxide: when you buy the Oxide cloud computer, all of the software to run it is included. This includes all of the software necessary to run the rack as elastic infrastructure: virtual compute, virtual storage, virtual networking. (And yes, it’s all open source — which unfortunately demands the immediate clarification that it’s actually open source rather than pretend open source.) When we add a new feature to our software, there is no licensing enablement or other such nuisance — the feature just comes with the next update. And what happens when AMD releases a new CPU with twice the core count? The new sled running the new CPU runs along your existing rack — you’re not paying more than the cost of the new sled itself. This gives the dividends of Moore’s Law (or Wright’s Law!) to whom they rightfully belong: the users of compute.

The SunOS JDK builder The Trouble with Tribbles...

I've been building OpenJDK on Solaris and illumos for a while.

This has been moderately successful; illumos distributions now have access to up to date LTS releases, most of which work well. (At least 11 and 17 are fine; 21 isn't quite right.)

There are even some third-party collections of my patches, primarily for Solaris (as opposed to illumos) builds.

I've added another tool: the SunOS jdk builder.

The aim here is to be able to build every single jdk tag, rather than going to one of the existing repos which only have the current builds. And, yes, you could grope through the git history to get to older builds, but one problem with that is that you can't actually fix problems with past builds.

Most of the content is in the jdk-sunos-patches repository. Here there are patches for both illumos and Solaris (they're ever so slightly different) for every tag I've built.

(That's almost every jdk tag since the Solaris/SPARC/Studio removal, and a few before that. Every so often I find I missed one. And there's been the odd bad patch along the way.)

The idea here is to make it easy to build every tag, and to do so on a current system. I've had to add new patches to get some of the older builds to work. The world has changed, we have newer compilers and other tools, and the OS we're building on has evolved. So if someone wanted to start building the jdk from scratch (and remember that you have to build all the versions in sequence) then this would be useful.

I'm using it for a couple of other things.

One is to put back SPARC support on illumos and Solaris. The initial port I did was on x86 only, so I'm walking through older builds and getting them to work on SPARC. We'll almost certainly not get to jdk21, but 17 seems a reasonable target.

The other thing is to enable the test suites, and then run them, and hopefully get them clean. At the moment they aren't, but a lot of that is because many tests are OS-specific and they don't know what Solaris is so get confused. With all the tags, I can bisect on failures and (hopefully) fix them.

What punch cards teach us about AI risk The Observation Deck

I (finally) read Edwin Black’s IBM and the Holocaust, and I can’t recommend it strongly enough. This book had been on my queue for years, and I put it off for the same reason that you have probably put it off: we don’t like to confront difficult things. But the book is superlative: not only is it fascinating and well-researched but given the current level of anxiety about the consequences of technological development, it feels especially timely. Black makes clear in his preface that IBM did not cause the Holocaust (unequivocally, the Holocaust would have happened without IBM), but he also makes clear in the book that information management was essential to every aspect of the Nazi war machine — and that that information management was made possible through IBM equipment and (especially) their punch cards.

I knew little of computing before the stored program computer, and two aspects of the punch card systems of this era surprised me: first, to assure correct operation in these most mechanical of systems, the punch cards themselves must be very precisely composed, manufactured, and handled — and the manufacturing process itself is difficult to replicate. Second, punch cards of this era were essentially single-use items: once a punch card had been through a calculation, it had to be scrapped. Given that IBM was the only creator of punch cards for its machines, this may sound like an early example of the razor blade model, but it is in fact even more lucrative: IBM didn’t sell the machines at a discount because they didn’t sell the machines at all — they rented them. This was an outrageously profitable business model, and a reflection of the most dominant trait of its CEO, Thomas J. Watson: devotion to profit over all else.

In the Nazis, Watson saw a business partner to advance that profit — and they saw in him an American advocate for appeasement, with Hitler awarding Watson the regime’s highest civilian medal in 1937. (In this regard, the Nazis themselves didn’t understand that Watson cared only about profit: unlike other American Nazi sympathizers, Watson would support an American war effort if he saw profit in it — and he publicly returned the medal after the invasion of Holland in 1940, when public support of the Nazis had become a clear commercial liability.) A particularly revealing moment with respect to Watson’s disposition was in September 1939 (after the invasion of Poland!) when IBM’s German subsidiary (known at the time as Dehomag) made the case to him that the IBM 405 alphabetizers owned by IBM’s Austrian entity in the annexed Austria now belonged to the German entity to lease as it pleased. These particular alphabetizers were important: the 405 was an order of magnitude improvement over the IBM 601 — and it was not broadly found in Europe. Watson resisted handing over the Austrian 405s, though not on any point of principle, but rather out of avarice: in exchange for the 405s, he demanded (as he had throughout the late 1930s) that he have complete ownership of IBM’s German subsidiary rather than the mere 90% that IBM controlled. The German subsidiary refused the demand and ultimately Watson relented — and the machines effectively became enlisted as German weapons of war.

IBM has made the case that it did not know how its machines were used to effect the Holocaust, but this is hard to believe given Watson’s level of micromanagement of the German subsidiary through Switzerland during the war: IBM knew which machines were where (and knew, for example, that concentration camps all had ample sorters and tabulators), to the point that the company was able to retrieve them all after the war — along with the profits that the machines had earned.

This all has much to teach us about the present day with respect to the true risks of technology. Technology serves as a force-multiplier on humanity, for both better and ill. The most horrific human act — genocide — requires organization and communication, two problems for which we have long developed technological solutions. Whether it was punch cards and tabulators in the Holocaust, radio transmission in the Rwandan Genocide, or Facebook in the Rohingya genocide, technology has sadly been used as an essential tool for our absolute worst. It may be tempting to blame the technology itself, but that in fact absolves the humans at the helm. Should we have stymied the development of tabulators and sorters in the 1920s and 1930s? No, of course not. And nor, for that matter, should Rwanda have been deprived of radio or Myanmar of social media. But this is not to say that we should ignore technology’s role, either: the UN erred in not destroying the radio transmission capabilities in Rwanda; Facebook erred by willfully ignoring the growing anti-Rohingya violence; and IBM emphatically erred by being willing to supply the Nazis in the name of its own profits.

To bring this into the present day: as I relayed in my recent Monktoberfest talk, the fears of AI autonomously destroying humanity are worse than nonsense, because they distract us from the very real possibilities of how AI may be abused. To allow ourselves to even contemplate a prohibition of the development of certain kinds of computer programs is to delude ourselves into thinking that the problem is a technical problem rather than a human one. Worse, the very absurdity of prohibition has itself created a reactionary movement in the so-called “effective accelerationists” who, like some AI equivalent of rolling coal, refuse to contemplate any negative ramifications of technological development whatsoever. This, too, is grievously wrong, and we need look no further than IBM’s involvement in the Holocaust to see the peril of absolute adherence to technology-based profit.

So what course to chart with respect to the (real, human) risks of AI? We should consider another important fact of IBM’s involvement with the Nazis: IBM itself skirted the law. Some of the most interesting findings in Black’s book are from the US Department of Treasury’s 1943 investigation into IBM’s collusion with Hitler. The investigator — Harold Carter — had plenty of evidence that IBM was violating the Trading with the Enemy Act, but Watson had also so thoroughly supported the Allied war effort that he was unassailable within the US. We already have regulatory regimes with respect to safety: you can’t just obtain fissile material or make a bioweapon — it doesn’t matter if ChatGPT told you to do it or not. We should be unafraid to enforce existing laws. Believing that (say) Uber was wrong to illegally put their self-driving cars on the street does not make one a “decel” or whatever — it makes one a believer in the rule of law in a democratic society. That this sounds radical — that one might believe in a democracy that creates laws, affords companies economic freedom within those laws, and enforces those laws against companies that choose to violate them — says much about our divisive times.

And all of this brings us to the broadest lesson of IBM and the Holocaust: technological development is by its nature new — a lurch into the unknown and unexplored — but as I have discovered over and over again in my career, history has much to teach us. Even though the specifics of the technologies we work on may be without precedent, the humanity they serve to advance endures across generations; those who fret about the future would be well advised to learn from the past!

Building up networks of zones on Tribblix The Trouble with Tribbles...

With OpenSolaris and derivatives such as illumos, we gained the ability to build a whole IT infrastructure in a single box, using virtualized networking (crossbow) to build the underlying network and then attaching virtualized systems (zones) atop virtualized storage (zfs).

Some of this was present in Solaris 10, but it didn't have crossbow so the networking piece was a bit tricky (although I did manage to get surprisingly far by abusing the loopback interface).

In Tribblix, I've long had the notion of a router or proxy zone, which acts as a bridge between the outside world and a local virtual subnet. For the next release I've been expanding that into something much more flexible and capable.

What did I need to put this together?

The first thing is a virtual network. You use dladm to create an etherstub. Think of that as a virtual switch you can connect network links to.

To connect that to the world, a zone is created with two network interfaces (vnics): one over the system interface so it can reach the outside world, and one over the etherstub.
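
Sketched as commands (the link and vnic names here are hypothetical; this is the shape of what the zone creation does, not its literal code):

```shell
# A virtual switch for the private subnet
dladm create-etherstub zstub0
# The router zone's outside leg: a vnic over the physical interface
dladm create-vnic -l e1000g0 rznic0
# The router zone's inside leg: a vnic over the etherstub
dladm create-vnic -l zstub0 rznic1
```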

That special router zone is a little bit more than that: it runs NAT so that traffic from the internal subnet can reach the outside world (simple NAT, nothing complicated here). In order to do that the zone has to have IPFilter installed, and the zone creation script creates the right ipnat configuration file and ensures that IPFilter is started.
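
The generated ipnat configuration amounts to a couple of map rules of this shape (interface name and subnet are hypothetical):

```shell
# /etc/ipf/ipnat.conf in the router zone: NAT the private subnet
# out of the zone's external interface
map rznic0 10.2.0.0/24 -> 0/32 portmap tcp/udp auto
map rznic0 10.2.0.0/24 -> 0/32
```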

You also need to have IPFilter installed in the global zone. It doesn't have to be running there, but the installation is required to create the IPFilter devices. Those IPFilter devices are then exposed to the zone, and for that to work the zone needs to use exclusive-ip networking rather than shared-ip (and would need to do so anyway for packet forwarding to work).

One thing I learnt was that you can't lock the router zone's networking down with allowed-address. The anti-spoofing protection that allowed-address gives you prevents forwarding and breaks NAT.

The router zone also has a couple of extra pieces of software installed. The first is haproxy, which is intended as an ingress controller. That's not currently used, and could be replaced by something else. The second is dnsmasq, which is used as a dhcp server to configure any zones that get connected to the subnet.

With a network segment in place, and a router zone for management, you can then create extra zones.

The way this works in Tribblix is that if you tell zap to create a zone with an IP address that is part of a private subnet, it will attach its network to the corresponding etherstub. That works fine for an exclusive-ip zone, where the vnic can be created directly over the etherstub.

For shared-ip zones it's a bit trickier. The etherstub isn't a real network device, although for some purposes (like creating a vnic) it looks like one. To allow shared-ip, I create a dedicated shared vnic over the etherstub, and the virtual addresses for shared-ip zones are associated with that vnic. For this to work, it has to be plumbed in the global zone, but doesn't need an address there. The downside to the shared-ip setup (or it might be an upside, depending on what the zone's going to be used for) is that in this configuration it doesn't get a network route; normally this would be inherited off the parent interface, but there isn't an IP configuration associated with the vnic in the global zone.
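
The shared-ip plumbing can be sketched as follows (names hypothetical): a dedicated vnic over the etherstub, plumbed in the global zone without an address, for the zones' virtual addresses to hang off:

```shell
# A dedicated shared vnic over the etherstub; shared-ip zones'
# virtual addresses are associated with it
dladm create-vnic -l zstub0 sznic0
# Plumb it in the global zone, but give it no address
ipadm create-if sznic0
```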

The shared-ip zone is handed its IP address. For exclusive-ip zones, the right configuration fragment is poked into dnsmasq on the router zone, so that if the zone asks via dhcp it will get the answer you configured. Generally, though, if I can directly configure the zone I will. And that's either by putting the right configuration into the files in a zone so it implements the right networking at boot, or via cloud-init. (Or, in the case of a solaris10 zone, I populate sysidcfg.)

There's actually a lot of steps here, and doing it by hand would be rather (ahem, very) tedious. So it's all automated by zap, the package and system administration tool in Tribblix. The user asks for a router zone, and all it needs to be given is the zone's name, the public IP address, and the subnet address, and all the work will be done automatically. It saves all the required details so that they can be picked up later. Likewise for a regular zone, it will do all the configuration based on the IP address you specify, with no extra input required from the user.

The whole aim here is to make building zones, and whole systems of zones, much easier and more reliable. And there's still a lot more capability to add.

A Gap in the TrustZone Preset Settings for the LPC55S69 Oxide Computer Company Blog

We’re very excited to have announced the general availability of our cloud computer. As part of this work, we continue to build on top of the LPC55S69 from NXP as our Root of Trust. We’ve discovered some gaps when using TrustZone preset settings on the LPC55S69 that can allow for unexpected behavior including enabling debug settings and exposure of the UDS (Unique Device Secret). These issues require a signed image or access at manufacturing time.

How to (safely, securely) configure a chip

The LPC55S69 uses the Armv8-M architecture which includes TrustZone-M. We’ve previously discussed some aspects of the Armv8-M architecture and presented on it in more detail. Fundamentally, setting up TrustZone-M is simply a matter of putting the right values in the right registers. The word "simply" is, of course, doing a lot of heavy lifting here. TrustZone-M must also be set up in conjunction with the Memory Protection Unit (MPU) and any other vendor specific security settings. Once the ideal settings have been decided upon, there’s still the matter of actually performing the register programming sequence. NXP offers a feature called TrustZone preset data to make this programming easier. Register data may optionally be appended to the end of an image for the LPC55S69, and the ROM will set the registers before jumping into the user image. Some of those registers may also be configured to prevent further modification. This means the user image does not need to be concerned with the settings for those registers.

The structure used to configure the registers looks like the following:

typedef struct _tzm_secure_config
{
  uint32_t cm33_vtor_addr;  /*! CM33 Secure vector table address */
  uint32_t cm33_vtor_ns_addr; /*! CM33 Non-secure vector table address */
  uint32_t cm33_nvic_itns0; /*! CM33 Interrupt target non-secure register 0 */
  uint32_t cm33_nvic_itns1; /*! CM33 Interrupt target non-secure register 1 */
  uint32_t mcm33_vtor_addr; /*! MCM33 Secure vector table address */
  uint32_t cm33_mpu_ctrl; /*! MPU Control Register.*/
  uint32_t cm33_mpu_mair0; /*! MPU Memory Attribute Indirection Register 0 */
  uint32_t cm33_mpu_mair1; /*! MPU Memory Attribute Indirection Register 1 */
  uint32_t cm33_mpu_rbar0; /*! MPU Region 0 Base Address Register */
  uint32_t cm33_mpu_rlar0; /*! MPU Region 0 Limit Address Register */
  uint32_t cm33_mpu_rbar1; /*! MPU Region 1 Base Address Register */
  uint32_t cm33_mpu_rlar1; /*! MPU Region 1 Limit Address Register */
  uint32_t cm33_mpu_rbar2; /*! MPU Region 2 Base Address Register */
  uint32_t cm33_mpu_rlar2; /*! MPU Region 2 Limit Address Register */
  uint32_t cm33_mpu_rbar3; /*! MPU Region 3 Base Address Register */
  uint32_t cm33_mpu_rlar3; /*! MPU Region 3 Limit Address Register */
  uint32_t cm33_mpu_rbar4; /*! MPU Region 4 Base Address Register */
  uint32_t cm33_mpu_rlar4; /*! MPU Region 4 Limit Address Register */
  uint32_t cm33_mpu_rbar5; /*! MPU Region 5 Base Address Register */
  uint32_t cm33_mpu_rlar5; /*! MPU Region 5 Limit Address Register */
  uint32_t cm33_mpu_rbar6; /*! MPU Region 6 Base Address Register */
  uint32_t cm33_mpu_rlar6; /*! MPU Region 6 Limit Address Register */
  uint32_t cm33_mpu_rbar7; /*! MPU Region 7 Base Address Register */
  uint32_t cm33_mpu_rlar7; /*! MPU Region 7 Limit Address Register */
  uint32_t cm33_mpu_ctrl_ns; /*! Non-secure MPU Control Register.*/
  uint32_t cm33_mpu_mair0_ns; /*! Non-secure MPU Memory Attribute Register 0 */
  uint32_t cm33_mpu_mair1_ns; /*! Non-secure MPU Memory Attribute Register 1 */
  uint32_t cm33_mpu_rbar0_ns; /*! Non-secure MPU Region 0 Base Address Register */
  uint32_t cm33_mpu_rlar0_ns; /*! Non-secure MPU Region 0 Limit Address Register */
  uint32_t cm33_mpu_rbar1_ns; /*! Non-secure MPU Region 1 Base Address Register */
  uint32_t cm33_mpu_rlar1_ns; /*! Non-secure MPU Region 1 Limit Address Register */
  uint32_t cm33_mpu_rbar2_ns; /*! Non-secure MPU Region 2 Base Address Register */
  uint32_t cm33_mpu_rlar2_ns; /*! Non-secure MPU Region 2 Limit Address Register */
  uint32_t cm33_mpu_rbar3_ns; /*! Non-secure MPU Region 3 Base Address Register */
  uint32_t cm33_mpu_rlar3_ns; /*! Non-secure MPU Region 3 Limit Address Register */
  uint32_t cm33_mpu_rbar4_ns; /*! Non-secure MPU Region 4 Base Address Register */
  uint32_t cm33_mpu_rlar4_ns; /*! Non-secure MPU Region 4 Limit Address Register */
  uint32_t cm33_mpu_rbar5_ns; /*! Non-secure MPU Region 5 Base Address Register */
  uint32_t cm33_mpu_rlar5_ns; /*! Non-secure MPU Region 5 Limit Address Register */
  uint32_t cm33_mpu_rbar6_ns; /*! Non-secure MPU Region 6 Base Address Register */
  uint32_t cm33_mpu_rlar6_ns; /*! Non-secure MPU Region 6 Limit Address Register */
  uint32_t cm33_mpu_rbar7_ns; /*! Non-secure MPU Region 7 Base Address Register */
  uint32_t cm33_mpu_rlar7_ns; /*! Non-secure MPU Region 7 Limit Address Register */
  uint32_t cm33_sau_ctrl;
  uint32_t cm33_sau_rbar0;/*! SAU Region 0 Base Address Register */
  uint32_t cm33_sau_rlar0;/*! SAU Region 0 Limit Address Register */
  uint32_t cm33_sau_rbar1;/*! SAU Region 1 Base Address Register */
  uint32_t cm33_sau_rlar1;/*! SAU Region 1 Limit Address Register */
  uint32_t cm33_sau_rbar2;/*! SAU Region 2 Base Address Register */
  uint32_t cm33_sau_rlar2;/*! SAU Region 2 Limit Address Register */
  uint32_t cm33_sau_rbar3;/*! SAU Region 3 Base Address Register */
  uint32_t cm33_sau_rlar3;/*! SAU Region 3 Limit Address Register */
  uint32_t cm33_sau_rbar4;/*! SAU Region 4 Base Address Register */
  uint32_t cm33_sau_rlar4;/*! SAU Region 4 Limit Address Register */
  uint32_t cm33_sau_rbar5;/*! SAU Region 5 Base Address Register */
  uint32_t cm33_sau_rlar5;/*! SAU Region 5 Limit Address Register */
  uint32_t cm33_sau_rbar6;/*! SAU Region 6 Base Address Register */
  uint32_t cm33_sau_rlar6;/*! SAU Region 6 Limit Address Register */
  uint32_t cm33_sau_rbar7;/*! SAU Region 7 Base Address Register */
  uint32_t cm33_sau_rlar7;/*! SAU Region 7 Limit Address Register */
  uint32_t flash_rom_slave_rule;/*! FLASH/ROM Slave Rule Register 0 */
  uint32_t flash_mem_rule0;/*! FLASH Memory Rule Register 0 */
  uint32_t flash_mem_rule1;/*! FLASH Memory Rule Register 1 */
  uint32_t flash_mem_rule2;/*! FLASH Memory Rule Register 2 */
  uint32_t rom_mem_rule0;/*! ROM Memory Rule Register 0 */
  uint32_t rom_mem_rule1;/*! ROM Memory Rule Register 1 */
  uint32_t rom_mem_rule2;/*! ROM Memory Rule Register 2 */
  uint32_t rom_mem_rule3;/*! ROM Memory Rule Register 3 */
  uint32_t ramx_slave_rule;
  uint32_t ramx_mem_rule0;
  uint32_t ram0_slave_rule;
  uint32_t ram0_mem_rule0;/*! RAM0 Memory Rule Register 0 */
  uint32_t ram0_mem_rule1;/*! RAM0 Memory Rule Register 1 */
  uint32_t ram1_slave_rule; /*! RAM1 Memory Rule Register 0 */
  uint32_t ram1_mem_rule1;/*! RAM1 Memory Rule Register 1 */
  uint32_t ram2_mem_rule1;/*! RAM2 Memory Rule Register 1 */
  uint32_t ram3_mem_rule0;/*! RAM3 Memory Rule Register 0 */
  uint32_t ram3_mem_rule1;/*! RAM3 Memory Rule Register 1 */
  uint32_t ram4_slave_rule;
  uint32_t ram2_mem_rule0;
  uint32_t ram3_slave_rule;
  uint32_t ram1_mem_rule0;
  uint32_t ram2_slave_rule;
  uint32_t ram4_mem_rule0;/*! RAM4 Memory Rule Register 0 */
  uint32_t apb_grp_slave_rule;/*! APB Bridge Group Slave Rule Register */
  uint32_t apb_grp0_mem_rule0;/*! APB Bridge Group 0 Memory Rule Register 0 */
  uint32_t apb_grp0_mem_rule1;/*! APB Bridge Group 0 Memory Rule Register 1 */
  uint32_t apb_grp0_mem_rule2;/*! APB Bridge Group 0 Memory Rule Register 2 */
  uint32_t apb_grp0_mem_rule3;/*! APB Bridge Group 0 Memory Rule Register 3 */
  uint32_t apb_grp1_mem_rule0;/*! APB Bridge Group 1 Memory Rule Register 0 */
  uint32_t apb_grp1_mem_rule1;/*! APB Bridge Group 1 Memory Rule Register 1 */
  uint32_t apb_grp1_mem_rule2;/*! APB Bridge Group 1 Memory Rule Register 2 */
  uint32_t apb_grp1_mem_rule3;/*! APB Bridge Group 1 Memory Rule Register 3 */
  uint32_t ahb_periph0_slave_rule0;/*! AHB Peripherals 0 Slave Rule Register 0 */
  uint32_t ahb_periph0_slave_rule1;/*! AHB Peripherals 0 Slave Rule Register 1 */
  uint32_t ahb_periph1_slave_rule0;/*! AHB Peripherals 1 Slave Rule Register 0 */
  uint32_t ahb_periph1_slave_rule1;/*! AHB Peripherals 1 Slave Rule Register 1 */
  uint32_t ahb_periph2_slave_rule0;/*! AHB Peripherals 2 Slave Rule Register 0 */
  uint32_t ahb_periph2_slave_rule1;/*! AHB Peripherals 2 Slave Rule Register 1 */
  uint32_t ahb_periph2_mem_rule0;/*! AHB Peripherals 2 Memory Rule Register 0*/
  uint32_t usb_hs_slave_rule0; /*! HS USB Slave Rule Register 0 */
  uint32_t usb_hs__mem_rule0; /*! HS USB Memory Rule Register 0 */
  uint32_t sec_gp_reg0;/*! Secure GPIO Register 0 */
  uint32_t sec_gp_reg1;/*! Secure GPIO Register 1 */
  uint32_t sec_gp_reg2;/*! Secure GPIO Register 2 */
  uint32_t sec_gp_reg3;/*! Secure GPIO Register 3 */
  uint32_t sec_int_reg0;/*! Secure Interrupt Mask for CPU1 Register 0 */
  uint32_t sec_int_reg1;/*! Secure Interrupt Mask for CPU1 Register 1 */
  uint32_t sec_gp_reg_lock;/*! Secure GPIO Lock Register */
  uint32_t master_sec_reg;/*! Master Secure Level Register */
  uint32_t master_sec_anti_pol_reg;
  uint32_t cm33_lock_reg; /*! CM33 Lock Control Register */
  uint32_t mcm33_lock_reg; /*! MCM33 Lock Control Register */
  uint32_t misc_ctrl_dp_reg;/*! Secure Control Duplicate Register */
  uint32_t misc_ctrl_reg;
  uint32_t misc_tzm_settings;
} tzm_secure_config_t;

An implementation detail of the ROM is that the settings for these registers are (mostly) applied in the order shown in the structure. This means that the very first register that gets changed is VTOR which switches the vector table from the one in the ROM to the user provided one. Any faults that occur after VTOR is changed will be handled by user code, not ROM code. This turns out to have some "interesting" side effects.

(Un)locking debug access

The LPC55S69 offers debug access via standard ARM interfaces (SWD). Debug access can be configured to be always available, always disabled, or only available to authenticated users. These settings are designed to be applied at manufacturing time via the CMPA region. Debugging is disabled by default while executing in the ROM and only enabled (if allowed) as the very last step before jumping to user code. The debug settings are also locked out, preventing further modification from user code except in specific authenticated circumstances. Because debug access is highly sensitive, it makes sense to minimize the amount of time the ROM spends with it enabled.

If the debug settings are applied last, the TrustZone preset settings must be applied before them. Combine this with the implementation detail of how the preset settings are applied: if the code faults after VTOR is changed but before the debug settings are applied, it is possible to run user-controlled code with the debug registers still open for modification.

How easy is it to actually trigger this? Very easy. Other registers in the preset structure include settings for the MPU. Setting the enable bit in MPU_CTRL without configuring any regions is enough to trigger a fault. NXP actually says in their manual that you need to make sure the entire ROM region is configured as secure, privileged, and executable, otherwise the "boot process will fail". "Fail" in this case means vectoring off into the appropriate fault handler in the user code.
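Concretely, a preset along these lines would do it. This is a hypothetical fragment: the field name is an assumption, since the full struct layout isn't reproduced here, but the bit semantics are standard ARMv8-M.

```c
/* Hypothetical preset fragment. With MPU_CTRL.ENABLE set (bit 0),
 * PRIVDEFENA clear (bit 2), and no MPU regions programmed, no address
 * matches any region, so even privileged accesses fault: the next ROM
 * instruction fetch triggers the fault. Field name is an assumption. */
tzm_secure_config_t cfg = {
    .mpu_ctrl = 0x1,  /* ENABLE=1, HFNMIENA=0, PRIVDEFENA=0 */
    /* VTOR pointing at the attacker's vector table, set elsewhere */
};
```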

This makes the following sequence possible:

  • Have debug disabled in the CMPA

  • Sign an image whose TrustZone preset settings contain a valid VTOR and MPU settings that exclude the ROM region

  • Have the MemManage fault handler follow the standard sequence to enable debugging

  • The image will trigger the fault handler and have debugging enabled despite the settings in the CMPA

This does require access to the secure boot signing key, but it’s a departure from the presentation of the CMPA settings as being independent of any possible settings in an image.

Extracting the UDS

One additional step in the setting of the debug registers is a final lockout of some PUF registers. The PUF (Physically Unclonable Function) is designed to tie secrets to a specific chip. When a secret is PUF encoded, it can only be decoded by that specific chip. The LPC55S69 uses the PUF to encode the Unique Device Secret (UDS) for use as the basis of a DICE identity. To ensure the identity is tied to the specific chip and cannot be cloned, access to the PUF index for the UDS is locked out after it is used.

The UDS is always locked out for secure boot images, but for non-secure images the ROM relies on the debug-settings code path to perform the lockout. TrustZone preset settings can be used with non-secure CRC images, which means the previously described issue can be used to extract the UDS, since the final lockout will never occur.

Requiring an unsigned image significantly limits the impact to cases such as the following:

  • Attacker at manufacturing time runs ISP command to generate the UDS on an unprogrammed LPC55S69

  • Attacker runs an unsigned image with a buggy TrustZone preset to extract the UDS

  • Attacker continues on with the rest of the manufacturing sequence, making sure not to re-generate the extracted UDS

This may be mitigated with sufficient tooling at manufacturing time, but the issue still remains.

Is this a security issue?

There was disagreement between Oxide and NXP about whether this qualifies as a true security vulnerability (Oxide’s opinion) or as a gap in design and documentation (NXP’s opinion). The disagreement centered on what exactly these issues make possible and what is required to exploit them. Unlocking the debug ports requires access to the secure boot signing keys, and arguably, if you can sign something with a bad TrustZone preset, you don’t need to bother with debug port access; once your secure boot integrity has been compromised, all bets are off. Oxide believes this undersells the potential for mitigation, and it is why we consider this a security issue: there could be circumstances where having debug port access would make extracting assets significantly easier.

Transparency is an Oxide value and that is what we strive for in bug reporting. Our goal is to make sure that issues are acknowledged and information about the bug is made widely available. NXP agreed to acknowledge this issue as a non-security errata and there will not be a CVE filed at this time. Given the narrow scope and lack of agreement between Oxide and NXP, filing a CVE would provide little benefit. If new information were to come to light from Oxide, NXP, or other researchers who are interested in our findings, we would re-evaluate this decision.

We are pleased that NXP is choosing to protect its customers by informing them of this gap. A bigger takeaway from this issue is to understand the limitations of secure/verified boot. A proper secure boot implementation will ensure that the only code that runs on a device is code that has been signed with an appropriate private key. Secure boot provides no assertions about the implementation of that code. The strength of secure boot is bounded by the code you choose to sign. In the absence of a fix for this errata, we will not be using the TrustZone preset data. If other customers choose to continue using TrustZone preset data they will need to be diligent about validating their inputs to avoid introducing gaps in the security model. Oxide has a commitment to open firmware to ensure our customers can have confidence in what code we will be signing to run on their machines.

Timeline

2023-08-16: Oxide discovers the issue while reviewing image settings

2023-08-21: Oxide discloses the issue to NXP PSIRT with a disclosure deadline of 2023-11-20

2023-08-21: NXP PSIRT acknowledges receipt

2023-10-11: NXP requests a meeting to discuss the report with Oxide

2023-10-19: Oxide and NXP meet to discuss the reported issues

2023-10-23: Oxide suggests documentation clarifications

2023-10-27: NXP agrees to issue an errata

2023-11-20: Oxide publishes this blog post as a disclosure

Is it worse for John Fisher? The Observation Deck

“It’s been worse for me than for you.” These extraordinary words came out of the mouth of John Fisher, incompetent owner of the Oakland Athletics, on the eve of getting approval from Major League Baseball to rip its roots out of the East Bay.

I have been reflecting a lot on these words. Strictly from a public relations point of view, they are gobsmackingly disrespectful, plumbing new depths of malpractice even for the worst ownership in sports. And of course, they are obviously wrong, as this clumsy move is worse for literally everyone else than it is for John Fisher. It is worse for the fans having their hearts ripped out; worse for the Oakland employees losing their jobs; worse for the many small businesses that make their livelihood on the team; worse for the players who have been told their entire athletic careers to take accountability only to be forced to watch in silence as their skinflint ownership takes none.

But there is a kind of truth to these words too, in that there are ways that it is worse for Fisher, for we have things that he cannot. Take, for example, the Reverse Boycott, the game on June 13th, 2023 when Oakland fans deliberately attended to show that we are, in fact, not the problem. Everything about that game was extraordinary: the energy was post-season electric as the worst-in-baseball A’s entered the game with a best-in-baseball win streak. The Coliseum was rocking, in a way that only the Coliseum can. Then, at the top of the 5th inning, the fans fell silent in protest of the move to Las Vegas. There was no plan beyond this; no one really knew what would happen when the silence ended. What happened next was spontaneous, straight from a shared heart that was breaking: a deafening chant, rolling and crashing over the stadium. “SELL! THE! TEAM! SELL! THE! TEAM!” (I accidentally recorded this; you can hear the emotion in my own voice — and that of my 11-year-old daughter next to me.) The game ended as only fiction would have it: with Trevor May striking out the best team in baseball to seal an improbable win for Oakland. The biggest surprise of the night was the sheer joy of it all: it was a New Orleans funeral for Oakland baseball, and we were glad to be there as a family. As I told my kids on the drive home, it was a night that they would one day tell their own grandchildren about.

How is it that a baseball game can conjure such emotion, let alone one from a losing franchise with a signed death warrant? Because, simply: sports are about much more than what’s on the field. Sports bring us together — they bind us across generation, disposition, and circumstance. A family that might agree on little else may shout in indignant agreement that that wasn’t pass interference or that he was obviously safe. They give us solidarity with one another: they give us stuff to believe in together, to celebrate together — and to grieve for together. In short, sports are the raw id of our own humanity. The Reverse Boycott distilled all of it into a single, singular night — binding us together in the kind of shared adversity that has always been the stuff of tribal legend.

And it is in this regard that John Fisher might be right: it is, in fact, worse for him, because this shared humanity of sports eludes him. His camera roll is not filled with A’s-themed birthday parties, or of selfies with his kids in rally caps, or of toddlers running the bases late on a Sunday afternoon. It would be tempting to say that he instead sees sports as only a business, but even this gives him too much credit: the only business he knows is assuring the mechanics of inheritance — of hoarding the spoils of his birth. In this regard, he is at least passably capable: he took MLB at its word that it would cut off his welfare payments if he did not secure a stadium deal by January 2024, and dutifully secured a deal, however obviously disastrous. It’s worse for John Fisher because this has all been laid bare: the real cost of securing his allowance is that his ineptitude is no longer merely an open secret among beleaguered A’s fans — he is now MLB’s famous failson, the Connor Roy of professional sports.

Whatever success John Fisher may find in Las Vegas, he will not be able to outrun the wreckage he is leaving behind here in Oakland. John Fisher’s obituary will speak not of what he built, but of what he broke; not of what he gave, but of what he took away. He will be a stain on his family, who will spend their lives trying to apologize for him. He himself will find that no amount of success will absolve him of the scar that he is leaving on the East Bay’s heart. And the much more likely scenario — abject commercial failure — will merely confirm for him his own nightmares: that he is exactly the klutz and dunce that he surely fears himself to be. And if John Fisher will always be searching for what he cannot get, we Oakland A’s fans will always have what cannot be taken away: our solidarity with — and love for — one another. We are raucous, brainy, creative, and eclectic; our lives are richer for having one another in them. John Fisher has none of this, and never will. As terrible as it is for us, it may indeed be worse for him.

OmniOS Community Edition r151048 OmniOS Community Edition

OmniOSce v11 r151048 is out!

On the 6th of November 2023, the OmniOSce Association released a new stable version of OmniOS - The Open Source Enterprise Server OS. The release comes with many tool updates, brand-new features and additional hardware support. For details see the release notes.

Note that r151044 is now end-of-life. You should upgrade to r151046 or r151048 to stay on a supported track. r151046 is an LTS release with support until May 2026, and r151048 is a stable release with support until November 2024.

For anyone who tracks LTS releases, the previous LTS - r151038 - now enters its last six months. You should plan to upgrade to r151046 for continued LTS support.

OmniOS is fully Open Source and free. Nevertheless, it takes a lot of time and money to keep maintaining a full-blown operating system distribution. Our statistics show that there are almost 2’000 active installations of OmniOS, while fewer than 20 people send regular contributions. If your organisation uses OmniOS-based servers, please consider becoming a regular patron or taking out a support contract.


Any problems or questions, please get in touch.

Keeping python modules in check The Trouble with Tribbles...

Any operating system distribution - and Tribblix is no different - will have a bunch of packages for python modules.

And one thing about python modules is that they tend to depend on other python modules. Sometimes a lot of python modules. Not only that, the dependency will be on a specific version - or range of versions - of particular modules.

Which opens up the possibility that two different modules might require incompatible versions of a module they both depend on.

For a long time, I was a bit lax about this. Most of the time you can get away with it (often because module writers are excessively cautious about newer versions of their dependencies). But occasionally I got bitten by upgrading a module and breaking something that used it, or breaking it because a dependency hadn't been updated to match.

So now I always check that I've got all the dependencies listed in packaging with

pip3 show modulename

and every time I update a module I check the dependencies aren't broken with

pip3 check

Of course, this relies on the machine having all the (interesting) modules installed, but on my main build machine that is generally true.

If an incompatibility is picked up by pip3 check then I'll either not do the update, or update any other modules to keep in sync. If an update is impossible, I'll take a note of which modules are blockers, and wait until they get an update to unjam the process.

A case in point was that urllib3 went to version 2.x recently. At first, nothing would allow that, so I couldn't update urllib3 at all. Now we're in a situation where I have one module I use that won't allow me to update urllib3, and am starting to see a few modules requiring urllib3 to be updated, so those are held downrev for the time being.

The package dependencies I declare tend to be the explicit module dependencies (as shown by pip3 show). Occasionally I'll declare some or all of the optional dependencies in packaging, if the standard use case suggests it. And there's no obvious easy way to emulate the notion of extras in package dependencies. But that can be handled in package overlays, which is the safest way in any case.
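The explicit dependencies that pip3 show reports can also be pulled straight from the installed metadata with the standard library, which is handy for scripting the packaging checks above. A minimal sketch (the helper name is mine, not a standard API):

```python
from importlib import metadata

def declared_deps(dist_name):
    """Return the dependency specifiers a distribution declares
    (what `pip3 show` lists under Requires), or None if it isn't
    installed on this machine."""
    try:
        # requires() returns the declared specifiers, or None if
        # the distribution declares no dependencies at all.
        return metadata.requires(dist_name) or []
    except metadata.PackageNotFoundError:
        return None

# A distribution that isn't installed yields None rather than raising.
print(declared_deps("no-such-distribution-xyz"))  # prints None
```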

Something else the checking can pick up is when a dependency is removed, which is something that can be easily missed.

Doing all the checking adds a little extra work up front, but should help remove one class of package breakage.

It seemed like a simple problem to fix The Trouble with Tribbles...

While a bit under the weather last week, I decided to try and fix what at first glance appears to be a simple problem:

need to ship the manpage with exa

Now, exa is a modern file lister, and the package on Tribblix doesn't ship a man page. The reason for that, it turns out, is that there isn't a man page in the source, but you can generate one.

To build the man page requires pandoc. OK, so how to get pandoc, which wasn't available on Tribblix? It's written in Haskell, and I did have a Haskell package.

Only my version of Haskell was a bit old, and wouldn't build pandoc. The build complains that it's too old and unsupported. You can't even build an old version of pandoc, which is a little peculiar.

Off to upgrade Haskell then. You need Haskell to build Haskell, and it has some specific requirements about precisely which versions of Haskell work. I wanted to get to 9.4, which is the last version of Haskell that builds using make (and I'll leave Hadrian for another day). You can't build Haskell 9.4 with 9.2, which it claims is too new; you have to go back to 9.0.

Fortunately we do have some bootstrap kits for illumos available, so I pulled 9.0 from there, successfully built Haskell, then cabal, and finally pandoc.

Back to exa. At which point you notice that it's been deprecated and replaced by eza. (This is a snag with modern point tools. They can disappear on a whim.)

So let's build eza. At which point I find that the MSRV (Minimum Supported Rust Version) has been bumped to 1.70, and I only had 1.69. Another update required. Rust is actually quite simple to package, you can just download the stable version and package it.

After all this, exa still doesn't have a man page, because it's deprecated (if you run man exa you get something completely different from X.Org). But I did manage to upgrade Haskell and Cabal, I managed to package pandoc, I updated rust, and I added a replacement utility - eza - which does now come with a man page.

The Cloud Computer Oxide Computer Company Blog

Today we are announcing the general availability of the world’s first commercial cloud computer — along with our $44M Series A financing.

From the outset at Oxide, and as I outlined in my 2020 Stanford talk, we have had three core beliefs as a company:

  1. Cloud computing is the future of all computing infrastructure.

  2. The computer that runs the cloud should be able to be purchased and not merely rented.

  3. Building a cloud computer necessitates a rack-level approach — and the co-design of both hardware and software.

Of these beliefs, the first is not at all controversial: the agility, flexibility, and scalability of cloud computing have been indisputably essential for many of the services that we depend on in the modern economy.

The degree to which the second belief is controversial, however, depends on who you are: for those who are already running on premises due to security, regulatory, economic, or latency reasons, it is self-evident that computers should be able to be purchased and not merely rented. But to others, this has been more of a revelation — and since we started Oxide, we have found more and more people realizing that the rental-only model for the cloud is not sustainable. Friends love to tag us on links to VC thinkpieces, CTO rants, or analyst reports on industry trends — and we love people thinking of us, of course (even when being tagged for the dozenth time!) — but the only surprise is how surprising it continues to be for some folks.

The third belief — that the development of a cloud computer necessitates rack-scale design of both hardware and software — may seem iconoclastic to those who think only in terms of software, but it is in fact not controversial among technologists: as computing pioneer Alan Kay famously observed, "people who are really serious about software should make their own hardware." This is especially true in cloud computing, where the large public cloud companies have long ago come to the conclusion that they needed to be designing their own holistic systems. But if this isn’t controversial, why hasn’t there been a cloud computer before Oxide’s? First, because it’s big: to meaningfully build a cloud computer, one must break out of the shackles of the 1U or 2U server, and really think about the rack as the unit of design. Second, it hasn’t been done because it’s hard: co-designing hardware and software that spans compute, networking, and storage requires building an extraordinary team across disparate disciplines, coupling deep expertise with a strong sense of versatility, teamwork, and empathy. And the team isn’t enough by itself: it also needs courage, resilience, and (especially) time.

So the biggest question when we set out was not "is the market there?" or "is this the right way to do it?", but rather: could we pull this off?

Pulling it off

We have indeed pulled it off — and it’s been a wild ride! While we have talked about the trek quite a bit on our podcast, Oxide and Friends (and specifically, Steve and I recently answered questions about the rack), our general availability is a good opportunity to reflect on some of the first impressions that the Oxide cloud computer has made upon those who have seen it.

"Where are all the boxes?"

The traditional rack-and-stack approach starts with a sea of boxes arriving with servers, racks, cabling, etc. This amounts to a literal kit car approach — and it starts with tedious, dusty, de-boxing. But the Oxide rack ships with everything installed and comes in just one box — a crate that is its own feat of engineering. All of this serves to dramatically reduce the latency from equipment arrival to power on and first provision — from weeks and months to days or even hours.

"Is it on?"

We knew at the outset that rack-level design would afford us the ability to change the geometry of compute sleds — that we would get higher density in the rack by trading horizontal real estate for vertical. We knew, too, that we were choosing to use 80mm fans for their ability to move more air much more efficiently — so much so that we leveraged our approach to the supply chain to partner with Sanyo Denki (our fan provider) to lower the minimum speed of the fans from 5K RPM to the 2K RPM that we needed. But adding it up, the Oxide rack has a surprising aesthetic attribute: it is whisper quiet. To those accustomed to screaming servers, this is so unexpected that when we were getting FCC compliance, the engineer running the test sheepishly asked us if we were sure the rack was on — when it was dissipating 15 kW! That the rack is quiet wasn’t really deliberate (and we are frankly much more interested in the often hidden power draw that blaring fan noise represents), but it does viscerally embody much of the Oxide differentiation with respect to both rack-level design and approach to the supply chain.

"Where are the cables?"

Anyone accustomed to a datacenter will note the missing mass of cold-aisle cabling that one typically sees at the front of a rack. But moving to the back of the rack reveals only a DC busbar and a tight, cabled backplane. This represents one of the bigger bets we made: we blindmated networking. This was mechanically tricky, but the payoff is huge: capacity can be added to the Oxide cloud computer simply by snapping in a new compute sled — nothing to be cabled whatsoever! This is a domain in which we have leapfrogged the hyperscalers, who (for their own legacy reasons) don’t do it this way. This can be jarring to veteran technologists. As one exclaimed upon seeing the rack last week, "I am both surprised and delighted!" (Or rather: a very profane variant of that sentiment.)

"You did your own switch too?!"

When we first started the company, one of our biggest technical quandaries was what to do about the switch. At some level, both paths seemed untenable: we knew from our own experience that integrating with third-party switches would lead to exactly the kind of integration pain for customers that we sought to alleviate — but it also seemed outrageously ambitious to do our own switch in addition to everything else we were doing. But as we have many times over the course of Oxide, we opted for the steeper path in the name of saving our customers grief, choosing to build our own switch. If it has to be said, getting it working isn’t easy! And of course, building the switch is insufficient: we also needed to build our own networking software — to say nothing of the management network required to be able to manage compute sleds when they’re powered off.

"Wait, that’s part of it?!"

It’s one thing to say that all of the software that one needs to operate the cloud computer is built in — but it’s another to actually see what that software includes. And for many, it’s seeing the Oxide web console (or its live demo!) that really drives the message home: yes, all of the software is included. And because the console implementation is built on the public API, everything that one can do in the console for the Oxide rack is also available via CLI and API — a concrete manifestation of our code-as-contract approach.

"And there’s no separate licensing?"

One common source of pain for users of on-prem infrastructure has been license management: financial pain due to over-paying and under-utilizing, and operational pain in the navigation of different license terms, different expiration dates, unpredictable dependencies, and uncertain vendor futures. From the beginning we knew that we wanted to deliver a delightful, integrated experience: we believe that cloud computers should come complete with all system software built-in, and with no additional licensing to manage or to pay for. Bug fixes and new features are always only an update away and do not require a multi-departmental discussion to determine value and budget.

"It’s all open source?"

While the software is an essential part of the Oxide cloud computer, what we sell is in fact the computer. As a champion of open source, this allows Oxide a particularly straightforward open source strategy: our software is all open. So you don’t need to worry about hinky open core models or relicensing surprises. And from a user perspective, you are assured levels of transparency that you don’t get in the public cloud — let alone the proprietary on-prem world.

Getting your own first impression

We’re really excited to have the first commercial cloud computer — and for it to be generally available! If you yourself are interested, we look forward to it making its first impression on you — reach out to us!