OmniOS Community Edition r151028j, r151026aj, r151022ch OmniOS Community Edition

OmniOS Community Edition weekly releases for w/c 7th of January 2019 are now available. These are reboot updates for r151028 only.

In all releases:


The following information relates to release r151028 only

Updates (r151028 only)

  • Fix for ZFS performance degradation with some pools due to frequent metaslab unload and re-load - OS-7151

  • bhyve updates including disk performance improvements.

  • Performance improvement in zone resource tracking on machines with many CPUs - illumos 9936

Fixes (r151028 only)

  • Workarounds for some hard disks and SSDs with buggy firmware relating to power conditions.

  • LX: fix to openat() in order to support newer systemd - see #331

  • bhyve/kvm brands did not support more than one disk when configured via zonecfg

Features (r151028 only)

  • Added library/security/openssl/preview to allow installation and testing of OpenSSL 1.1.1 - see blog post

For more details, see https://omniosce.org/releasenotes

Any problems or questions, please get in touch via the lobby or #omnios on Freenode

Previewing OpenSSL 1.1.1 on OmniOS r151028 OmniOS Community Edition

We’ve had a number of requests for updating the OpenSSL package in OmniOS r151028 to version 1.1.1, which have led to the creation of a new library/security/openssl/preview package. This allows updating to openssl 1.1.1 on r151028 ahead of the r151030 release (scheduled for May). We’re using this package on our own servers to enable TLS/1.3 for various services.

Installation

In order to make the switch, first ensure that you are on the latest version of the library/security/openssl package. It should be dated 20181214 or later.

omnios$ pkg list -v openssl
FMRI                                                                       IFO
pkg://omnios/library/security/openssl@1.1.0.10-151028.0:20181214T120225Z   i--

Then install the new preview package:

omnios$ openssl version
OpenSSL 1.1.0j  20 Nov 2018

omnios$ pfexec pkg install openssl/preview

omnios$ openssl version
OpenSSL 1.1.1a  20 Nov 2018

Reverting

Due to the way the packages are structured, reverting to version 1.1.0 requires an additional step over just removing the preview package:

omnios$ pfexec pkg uninstall openssl/preview

omnios$ pfexec pkg revert --tagged openssl-preview

omnios$ openssl version
OpenSSL 1.1.0j  20 Nov 2018

Wireguard - Android Road Warrior Nahum Shalman

Motivation

There are a lot of blog posts and wiki pages about how to set up Wireguard, but I still had to do a bunch of trial and error to come up with a configuration that worked for me. I have two goals:

  1. Secure all traffic from my Android phone for privacy / when on unsecured WiFi.
  2. Full access to my home network resources when away from home.

Caveats:

  • This does client key generation on the server, which is fine for this simple use case, but not a cryptographic best practice.
  • This sets up only a single client. If you need multiple clients, you'll have to get fancier.
  • As part of meeting my goals, ALL phone traffic goes through the VPN. That may not be your desired configuration.

Preparation

You will need:

  1. The public IP address of your router (or a DNS record that points to it)
  2. An open port on your router forwarded to wherever you run Wireguard
  3. The IP address of the DNS server your phone should use when connected to your home network (maybe your router, maybe not)
  4. Two addresses in a private subnet not used elsewhere on your home network

Implementation

I hosted all of this in a VM running Ubuntu 18.04.1.
At the end of this process, a QR Code is displayed to be scanned by the Wireguard Android app.
This script is run as root via sudo bash -x simple-wireguard.sh:

#!/bin/bash -x

################ CHANGE THESE ##################
# Your Home IP or DNS record
WG_ADDRESS=wireguard.your.domain  
# UDP port forwarded to this machine
WG_PORT=12345  
# DNS entries for the client to use when on VPN
WG_DNS=192.168.1.1,8.8.8.8  
# Unused private IPs for this connection
WG_SERVER_INT=172.16.17.1  
WG_CLIENT_INT=172.16.17.2  
################################################

add-apt-repository -y ppa:wireguard/wireguard  
apt-get -y update  
apt-get -y install wireguard qrencode

# Enable Packet Forwarding
echo net.ipv4.ip_forward=1 > /etc/sysctl.d/wireguard.conf  
sysctl -p /etc/sysctl.d/wireguard.conf

# Generate Keys and Configurations
cd /etc/wireguard  
wg genkey | tee server.private | wg pubkey > server.public  
wg genkey | tee client.private | wg pubkey > client.public

# Server Configuration sets up NAT
# (net0 is the external interface name in this VM; adjust to match yours)
cat >wg0.conf <<EOF  
[Interface]
Address = ${WG_SERVER_INT}  
SaveConfig = false  
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o net0 -j MASQUERADE  
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o net0 -j MASQUERADE  
ListenPort = ${WG_PORT}  
PrivateKey = $(cat server.private)

[Peer]
PublicKey = $(cat client.public)  
AllowedIPs = ${WG_CLIENT_INT}  
EOF

cat >client.conf <<EOF  
[Interface]
Address = ${WG_CLIENT_INT}  
PrivateKey = $(cat client.private)  
DNS = ${WG_DNS}

[Peer]
PublicKey = $(cat server.public)  
Endpoint = ${WG_ADDRESS}:${WG_PORT}  
AllowedIPs = 0.0.0.0/0  
PersistentKeepalive = 10  
EOF

# Turn on and enable for boot
systemctl start wg-quick@wg0  
systemctl enable wg-quick@wg0

# Show the QR Code
qrencode -t ansiutf8 < client.conf  

Better Tools

I wanted something simple and comprehensible that I built myself, but there are tools out there that build fancier configurations and are recommended by smart people in this space (I haven't used either).


A EULA in FOSS clothing? The Observation Deck

There was a tremendous amount of reaction to and discussion about my blog entry on the midlife crisis in open source. As part of this discussion on HN, Jay Kreps of Confluent took the time to write a detailed response — which he shortly thereafter elevated into a blog entry.

Let me be clear that I hold Jay in high regard, as both a software engineer and an entrepreneur — and I appreciate the time he took to write a thoughtful response. That said, there are aspects of his response that I found troubling enough to closely re-read the Confluent Community License — and that in turn has led me to a deeply disturbing realization about what is potentially going on here.

Here is what Jay said that I found troubling:

The book analogy is not accurate; for starters, copyright does not apply to physical books and intangibles like software or digital books in the same way.

Now, what Jay said is true to a degree in that (as with many different kinds of expression), software has code specific to it; this can be found in 17 U.S.C. § 117. But the fact that Jay also made reference to digital books was odd; digital books really have nothing to do with software (or not any more so than any other kind of creative expression). That said, digital books and proprietary software do actually share one thing in common, though it’s horrifying: in both cases their creators have maintained that you don’t actually own the copy you paid for. That is, unlike a book, you don’t actually buy a copy of a digital book, you merely acquire a license to use their book under their terms. But how do they do this? Because when you access the digital book, you click “agree” on a license — an End User License Agreement (EULA) — that makes clear that you don’t actually own anything. The exact language varies; take (for example) VMware’s end user license agreement:

2.1 General License Grant. VMware grants to You a non-exclusive, non-transferable (except as set forth in Section 12.1 (Transfers; Assignment) license to use the Software and the Documentation during the period of the license and within the Territory, solely for Your internal business operations, and subject to the provisions of the Product Guide. Unless otherwise indicated in the Order, licenses granted to You will be perpetual, will be for use of object code only, and will commence on either delivery of the physical media or the date You are notified of availability for electronic download.

That’s a bit wordy and oblique; in this regard, Microsoft’s Windows 10 license is refreshingly blunt:

(2)(a) License. The software is licensed, not sold. Under this agreement, we grant you the right to install and run one instance of the software on your device (the licensed device), for use by one person at a time, so long as you comply with all the terms of this agreement.

That’s pretty concise: “The software is licensed, not sold.” So why do this at all? EULAs are an attempt to get out of copyright law — where the copyright owner is quite limited in the rights afforded to them as to how the content is consumed — and into contract law, where there are many fewer such limits. And EULAs have accordingly historically restricted (or tried to restrict) all sorts of uses like benchmarking, reverse engineering, running with competitive products (or, say, being used by a competitor to make competitive products), and so on.

Given the onerous restrictions, it is not surprising that EULAs are very controversial. They are also legally dubious: when you are forced to click through or (as it used to be back in the day) forced to unwrap a sealed envelope on which the EULA is printed to get to the actual media, it’s unclear how much you are actually “agreeing” to — and it may be considered a contract of adhesion. And this is just one of many legal objections to EULAs.

Suffice it to say, EULAs have long been considered open source poison, so with Jay’s frightening reference to EULA’d content, I went back to the Confluent Community License — and proceeded to kick myself for having missed it all on my first quick read. First, there’s this:

This Confluent Community License Agreement Version 1.0 (the “Agreement”) sets forth the terms on which Confluent, Inc. (“Confluent”) makes available certain software made available by Confluent under this Agreement (the “Software”). BY INSTALLING, DOWNLOADING, ACCESSING, USING OR DISTRIBUTING ANY OF THE SOFTWARE, YOU AGREE TO THE TERMS AND CONDITIONS OF THIS AGREEMENT. IF YOU DO NOT AGREE TO SUCH TERMS AND CONDITIONS, YOU MUST NOT USE THE SOFTWARE. IF YOU ARE RECEIVING THE SOFTWARE ON BEHALF OF A LEGAL ENTITY, YOU REPRESENT AND WARRANT THAT YOU HAVE THE ACTUAL AUTHORITY TO AGREE TO THE TERMS AND CONDITIONS OF THIS AGREEMENT ON BEHALF OF SUCH ENTITY.

You will notice that this looks nothing like any traditional source-based license — but it is exactly the kind of boilerplate that you find on EULAs, terms-of-service agreements, and other contracts that are being rammed down your throat. And then there’s this:

1.1 License. Subject to the terms and conditions of this Agreement, Confluent hereby grants to Licensee a non-exclusive, royalty-free, worldwide, non-transferable, non-sublicenseable license during the term of this Agreement to: (a) use the Software; (b) prepare modifications and derivative works of the Software; (c) distribute the Software (including without limitation in source code or object code form); and (d) reproduce copies of the Software (the “License”).

On the one hand, this looks like the opening of open source licenses like (say) the Apache Public License (albeit missing important words like “perpetual” and “irrevocable”), but the next two sentences are the differences that are the focus of the license:

Licensee is not granted the right to, and Licensee shall not, exercise the License for an Excluded Purpose. For purposes of this Agreement, “Excluded Purpose” means making available any software-as-a-service, platform-as-a-service, infrastructure-as-a-service or other similar online service that competes with Confluent products or services that provide the Software.

But how can you later tell me that I can’t use my copy of the software because it competes with a service that Confluent started to offer? Or is that copy not in fact mine? This is answered in section 3:

Confluent will retain all right, title, and interest in the Software, and all intellectual property rights therein.

Okay, so my copy of the software isn’t mine at all. On the one hand, this is (literally) proprietary software boilerplate — but I was given the source code and the right to modify it; how do I not own my copy? Again, proprietary software is built on the notion that — unlike the book you bought at the bookstore — you don’t own anything: rather, you license the copy that is in fact owned by the software company. And again, as it stands, this is dubious, and courts have ruled against “licensed, not sold” software. But how can a license explicitly allow me to modify the software and at the same time tell me that I don’t own the copy that I just modified?! And to be clear: I’m not asking who owns the copyright (that part is clear, as it is for open source) — I’m asking who owns the copy of the work that I have modified? How can one argue that I don’t own the copy of the software that I downloaded, modified and built myself?!

This prompts the following questions, which I also asked Jay via Twitter:

  1. If I git clone software covered under the Confluent Community License, who owns that copy of the software?
  2. Do you consider the Confluent Community License to be a contract?
  3. Do you consider the Confluent Community License to be a EULA?

To Confluent: please answer these questions, and put the answers in your FAQ. Again, I think it’s fine for you to be an open core company; just make this software proprietary and be done with it. (And don’t let yourself be troubled about the fact that it was once open source; there is ample precedent for reproprietarizing software.) What I object to (and what I think others object to) is trying to be at once open and proprietary; you must pick one.

To GitHub: Assuming that this is in fact a EULA, I think it is perilous to allow EULAs to sit in public repositories. It’s one thing to have one click through to accept a license (though again, that itself is dubious), but to say that a git clone is an implicit acceptance of a contract that happens to be sitting somewhere in the repository beggars belief. With efforts like choosealicense.com, GitHub has been a model in guiding projects with respect to licensing; it would be helpful for GitHub’s counsel to weigh in on their view of this new strain of source-available proprietary software and the degree to which it comes into conflict with GitHub’s own terms of service.

To foundations concerned with software liberties, including the Apache Foundation, the Linux Foundation, the Free Software Foundation, the Electronic Frontier Foundation, the Open Source Initiative, and the Software Freedom Conservancy: the open source community needs your legal review on this! I don’t think I’m being too alarmist when I say that this is potentially a dangerous new precedent being set; it would be very helpful to have your lawyers offer their perspectives on this, even if they disagree with one another. We seem to be in some terrible new era of frankenlicenses, where the worst of proprietary licenses are bolted on to the goodwill created by open source licenses; we need your legal voices before these creatures destroy the village!

Open source confronts its midlife crisis The Observation Deck

Midlife is tough: the idealism of youth has faded, as has inevitably some of its fitness and vigor. At the same time, the responsibilities of adulthood have grown: the kids that were such a fresh adventure when they were infants and toddlers are now grappling with their own transition into adulthood — and you try to remind yourself that the kids that you have sacrificed so much for probably don’t actually hate your guts, regardless of that post they just liked on the ‘gram. Making things more challenging, while you are navigating the turbulence of teenagers, your own parents are likely entering life’s twilight, needing help in new ways from their adult children. By midlife, in addition to the singular joys of life, you have also likely experienced its terrible sorrows: death, heartbreak, betrayal. Taken together, the fading of youth, the growth in responsibility and the endurance of misfortune can lead to cynicism or (worse) drastic and poorly thought-out choices. Add in a little fear of mortality and some existential dread, and you have the stuff of which midlife crises are made…

I raise this not because of my own adventures at midlife, but because it is clear to me that open source — now several decades old and fully adult — is going through its own midlife crisis. This has long been in the making: for years, I (and others) have been critical of service providers’ parasitic relationship with open source, as cloud service providers turn open source software into a service offering without giving back to the communities upon which they implicitly depend. At the same time, open source has been (rightfully) entirely unsympathetic to the proprietary software models that have been burned to the ground — but also seemingly oblivious as to the larger economic waves that have buoyed them.

So it seemed like only a matter of time before the companies built around open source software would have to confront their own crisis of confidence: open source business models are really tough, selling software-as-a-service is one of the most natural of them, the cloud service providers are really good at it — and their commercial appetites seem boundless. And, like a new cherry red two-seater sports car next to a minivan in a suburban driveway, some open source companies are dealing with this crisis exceptionally poorly: they are trying to restrict the way that their open source software can be used. These companies want it both ways: they want the advantages of open source — the community, the positivity, the energy, the adoption, the downloads — but they also want to enjoy the fruits of proprietary software companies in software lock-in and its concomitant monopolistic rents. If this were entirely transparent (that is, if some bits were merely being made explicitly proprietary), it would be fine: we could accept these companies as essentially proprietary software companies, albeit with an open source loss-leader. But instead, these companies are trying to license their way into this self-contradictory world: continuing to claim to be entirely open source, but perverting the license under which portions of that source are available. Most gallingly, they are doing this by hijacking open source nomenclature. Of these, the laughably named commons clause is the worst offender (it is plainly designed to be confused with the purely virtuous creative commons), but others (including CockroachDB’s Community License, MongoDB’s Server Side Public License, and Confluent’s Community License) are little better. And in particular, as it apparently needs to be said: no, “community” is not the opposite of “open source” — please stop sullying its good name by attaching it to licenses that are deliberately not open source! But even if they were more aptly named (e.g. “the restricted clause” or “the controlled use license” or — perhaps most honest of all — “the please-don’t-put-me-out-of-business-during-the-next-re:Invent-keynote clause”), these licenses suffer from a serious problem: they are almost certainly asserting rights that the copyright holder doesn’t in fact have.

If I sell you a book that I wrote, I can restrict your right to read it aloud for an audience, or sell a translation, or write a sequel; these restrictions are rights afforded the copyright holder. I cannot, however, tell you that you can’t put the book on the same bookshelf as that of my rival, or that you can’t read the book while flying a particular airline I dislike, or that you aren’t allowed to read the book and also work for a company that competes with mine. (Lest you think that last example absurd, that’s almost verbatim the language in the new Confluent Community (sic) License.) I personally think that none of these licenses would withstand a court challenge, but I also don’t think it will come to that: because the vendors behind these licenses will surely fear that they wouldn’t survive litigation, they will deliberately avoid inviting such challenges. In some ways, this netherworld is even worse, as the license becomes a vessel for unverifiable fear of arbitrary liability.

Legal dubiousness aside, as with that midlife hot rod, the licenses aren’t going to address the underlying problem. To be clear, the underlying problem is not the licensing, it’s that these companies don’t know how to make money — they want open source to be its own business model, and seeing that the cloud service providers have an entirely viable business model, they want a piece of the action. But as a result of these restrictive riders, one of two things will happen with respect to a cloud services provider that wants to build a service offering around the software:

  1. The cloud services provider will build their service not based on the software, but rather on another open source implementation that doesn’t suffer from the complication of a lurking company with brazenly proprietary ambitions.
  2. The cloud services provider will build their service on the software, but will use only the truly open source bits, reimplementing (and keeping proprietary) any of the surrounding software that they need.

In the first case, the victory is strictly pyrrhic: yes, the cloud services provider has been prevented from monetizing the software — but the software will now have less of the adoption that is the lifeblood of a thriving community. In the second case, there is no real advantage over the current state of affairs: the core software is still being used without the open source company being explicitly paid for it. Worse, the software and its community have been harmed: where one could previously appeal to the social contract of open source (namely, that cloud service providers have a social responsibility to contribute back to the projects upon which they depend), now there is little to motivate such reciprocity. Why should the cloud services provider contribute anything back to a company that has declared war on it? (Or, worse, implicitly accused it of malfeasance.) Indeed, as long as fights are being picked with them, cloud service providers will likely clutch their bug fixes in the open core as a differentiator, cackling to themselves over the gnarly race conditions that they have fixed of which the community is blissfully unaware. Is this in any way a desired end state?

So those are the two cases, and they are both essentially bad for the open source project. Now, one may notice that there is a choice missing, and for those open source companies that still harbor magical beliefs, let me put this to you as directly as possible: cloud services providers are emphatically not going to license your proprietary software. I mean, you knew that, right? The whole premise with your proprietary license is that you are finding that there is no way to compete with the operational dominance of the cloud services providers; did you really believe that those same dominant cloud services providers can’t simply reimplement your LDAP integration or whatever? The cloud services providers are currently reproprietarizing all of computing — they are making their own CPUs for crying out loud! — reimplementing the bits of your software that they need in the name of the service that their customers want (and will pay for!) won’t even move the needle in terms of their effort.

Worse than all of this (and the reason why this madness needs to stop): licenses that are vague with respect to permitted use are corporate toxin. Any company that has been through an acquisition can speak of the peril of the due diligence license audit: the acquiring entity is almost always deep pocketed and (not unrelatedly) risk averse; the last thing that any company wants is for a deal to go sideways because of concern over unbounded liability to some third-party knuckle-head. So companies that engage in license tomfoolery are doing worse than merely not solving their own problem: they are potentially poisoning the wellspring of their own community.

So what to do? Those of us who have been around for a while — who came up in the era of proprietary software and saw the merciless transition to open source software — know that there’s no way to cross back over the Rubicon. Open source software companies need to come to grips with that uncomfortable truth: their business model isn’t their community’s problem, and they should please stop trying to make it one. And while they’re at it, it would be great if they could please stop making outlandish threats about the demise of open source; they sound like shrieking proprietary software companies from the 1990s, warning that open source will be ridden with nefarious backdoors and unspecified legal liabilities. (Okay, yes, a confession: just as one’s first argument with their teenager is likely to give their own parents uncontrollable fits of smug snickering, those of us who came up in proprietary software may find companies decrying the economically devastating use of their open source software to be amusingly ironic — but our schadenfreude cups runneth over, so they can definitely stop now.)

So yes, these companies have a clear business problem: they need to find goods and services that people will exchange money for. There are many business models that are complementary with respect to open source, and some of the best open source software (and certainly the least complicated from a licensing drama perspective!) comes from companies that simply needed the software and open sourced it because they wanted to build a community around it. (There are many examples of this, but the outstanding Envoy and Jaeger both come to mind — the former from Lyft, the latter from Uber.) In this regard, open source is like a remote-friendly working policy: it’s something that you do because it makes economic and social sense; even as it’s very core to your business, it’s not a business model in and of itself.

That said, it is possible to build business models around the open source software that is a company’s expertise and passion! Even though the VC that led the last round wants to puke into a trashcan whenever they hear it, business models like “support”, “services” and “training” are entirely viable! (That’s the good news; the bad news is that they may not deliver the up-and-to-the-right growth that these companies may have promised in their pitch deck — and they may come at too low a margin to pay for large teams, lavish perks, or outsized exits.) And of course, making software available as a service is also an entirely viable business model — but I’m pretty sure they’ve heard about that one in the keynote.

As part of their quest for a business model, these companies should read Adam Jacob’s excellent blog entry on sustainable free and open source communities. Adam sees what I see (and Stephen O’Grady sees and Roman Shaposhnik sees), and he has taken a really positive action by starting the Sustainable Free and Open Source Communities project. This project has a lot to be said for it: it explicitly focuses on building community; it emphasizes social contracts; it seeks longevity for the open source artifacts; it shows the way to viable business models; it rejects copyright assignment to a corporate entity. Adam’s efforts can serve to clear our collective head, and to focus on what’s really important: the health of the communities around open source. By focusing on longevity, we can plainly see restrictive licensing as the death warrant that it is, shackling the fate of a community to that of a company. (Viz. after the company behind AGPL-licensed RethinkDB capsized, it took the Linux Foundation buying the assets and relicensing them to rescue the community.) Best of all, it’s written by someone who has built a business that has open source software at its heart. Adam has endured the challenges of the open core model, and is refreshingly frank about its economic and psychic tradeoffs. And if he doesn’t make it explicit, Adam’s fundamental optimism serves to remind us, too, that any perceived “danger” to open source is overblown: open source is going to endure, as no company is going to be able to repeal the economics of software. That said, as we collectively internalize that open source is not a business model on its own, we will likely see fewer VC-funded open source companies (though I’m honestly not sure that that’s a bad thing).

I don’t think that it’s an accident that Adam, Stephen, Roman and I see more or less the same thing and are more or less the same age: not only have we collectively experienced many sides of this, but we are at once young enough to still recall our own idealism, yet old enough to know that coercion never endures in the limit. In short, this too shall pass — and in the end, open source will survive its midlife questioning just as people in midlife get through theirs: by returning to its core values and by finding rejuvenation in its communities. Indeed, we can all find solace in the fact that while life is finite, our values and our communities survive us — and that our engagement with them is our most important legacy.

Golang sync.Cond vs. Channel... /dev/dump

The backstory here is that mostly I love the Go programming language.

But I've been very dismayed by certain statements from some of the core Go team members about topics that have significant ramifications for my concurrent application design.  Specifically, bold statements to the effect that "channels" are the way to write concurrent programs, and deemphasizing condition variables.  (In one case, there is even a proposal to remove condition variables entirely from Go2!)

The Go Position


Essentially, the Go team believes very strongly in a design principle that can be stated thusly:

"Do not communicate by sharing memory; instead, share memory by communicating."

This design principle underlies the design of channels, which behave very much like UNIX pipes, although there are some very surprising semantics associated with channels, which I have found limiting over the years.  More on that below.

Certainly, if you can avoid having shared memory state, but instead pass your entire state between cooperating parties, this leads to a simpler, lock free (sort of -- channels have their own locks under the hood!) design.  When your work is easily expressed as a pattern of pipelines, this is a better design.
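As a concrete illustration of that pipeline style, here is a minimal sketch of my own (the stages and numbers are made up, not from any particular program): each stage owns the data it is working on and hands results to the next stage over a channel, so no stage ever touches shared state.

package main

import "fmt"

// gen emits the given numbers on a channel and closes it when done.
func gen(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for _, n := range nums {
            out <- n
        }
    }()
    return out
}

// square reads from the previous stage and feeds the next one;
// there is no shared state anywhere, only values in flight.
func square(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range in {
            out <- n * n
        }
    }()
    return out
}

func main() {
    for n := range square(gen(1, 2, 3)) {
        fmt.Println(n) // prints 1, 4, 9
    }
}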

The Real World


The problem is that sometimes (frequently in the real world) your design cannot be expressed this way.  Imagine a game engine, dealing with events from the network, multiple players, input sources, physics, modeling, etc.  One simple design is to use a single engine model, with a single goroutine, and have events come in via many channels.  Then you have to create a giant select loop to consume events.  This is typical of large event driven systems.
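A minimal sketch of that design (the event sources are hypothetical, not from any real engine): one goroutine owns the model, and a single hard-coded select fans in every event channel.

package main

import "fmt"

type model struct {
    frames int // engine state, owned exclusively by the event-loop goroutine
}

func main() {
    netEvents := make(chan string)   // network traffic
    inputEvents := make(chan string) // controllers, keyboards, ...
    physicsTicks := make(chan int)   // physics engine updates
    quit := make(chan struct{})

    // Simulate some incoming events.
    go func() {
        netEvents <- "player joined"
        inputEvents <- "button A"
        physicsTicks <- 1
        close(quit)
    }()

    m := &model{}
    for {
        // The giant select: every event source needs its own case here,
        // fixed at compile time -- a new source means editing this loop.
        select {
        case ev := <-netEvents:
            fmt.Println("net:", ev)
        case ev := <-inputEvents:
            fmt.Println("input:", ev)
        case t := <-physicsTicks:
            m.frames += t
        case <-quit:
            fmt.Println("frames simulated:", m.frames)
            return
        }
    }
}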

There are some problems with this model.


  1.  Adding channels dynamically just isn't really possible, because you have a single hard coded select loop.  Which means you can't always cope with changes in the real world.   (For example, if you have a channel for inputs, what happens when someone plugs in a new controller?)
  2. Any processing that has to be done on your common state needs to be in that giant event loop.  For example, updates to lighting effects because of an in-game event like a laser beam need to know lots of things about the model -- the starting point of the laser beam, the position of any possible objects in the path of the laser, and so forth.  And then this can update the state model with things like whether the beam hit an object, causing a player kill, etc.
  3. Consequently, it is somewhere between difficult and impossible to really engage multiple CPU cores in this model.  (Modern multithreaded games may have an event loop, but they will also make heavy use of locks to access shared state, in order to permit physics calculations and such to be done in parallel with other tasks.)


So in the real world, we sometimes have to share memory still.

Limitations of Channels


There are some other specific problems with channels as well.


  • Closed channels cannot be closed again (panic if you do), and writing to a closed channel panics. 
  • This means that you cannot easily use Go channels with multiple writers.  Instead, you have to orchestrate closing the channel with some other outside synchronization primitive, such as a mutex and flag, or a wait group.  This semantic also means that close() is not idempotent.  That's a really unfortunate design choice.
  • It's not possible to broadcast to multiple readers simultaneously with a channel other than by closing it.  For example, if I want to wake a bunch of readers simultaneously (such as to notify multiple client applications about a significant change in a global status), I have no easy way to do that.  I either need to have separate channels for each waiter, or I need to hack together something else (for example adding a mutex, and allocating a fresh replacement channel each time I need to do a broadcast.  The mutex has to be used so that waiters know to rescan for a changed channel, and to ensure that if there are multiple signalers, I don't wake them all.)  A condition variable handles this case directly; see the sketch after this list.
  • Channels are slow.  More correctly, select with multiple channels is slow.  This means that designs where I have multiple potential "wakers" (for different events) require the use of separate channels, with separate cases within a select statement.  For performance sensitive work, the cost of adding a single additional case to a select statement was found to be quite measurable.

There are other things about channels that are unfortunate (for example no way to peek, or to return an object to a channel), but not necessarily fatal.
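To make the broadcast point concrete, here is a minimal sketch of my own (the global-status scenario is invented to match the example above) using the "single mutex and a condition variable" design discussed below.  One publish() call wakes every waiter, and unlike close() on a channel it can be called any number of times.

package main

import (
    "fmt"
    "sync"
)

type status struct {
    mu      sync.Mutex
    cond    *sync.Cond
    version int // global status generation, guarded by mu
}

func newStatus() *status {
    s := &status{}
    s.cond = sync.NewCond(&s.mu)
    return s
}

// wait blocks until the status version moves past the one the caller saw.
func (s *status) wait(seen int) int {
    s.mu.Lock()
    defer s.mu.Unlock()
    for s.version == seen {
        s.cond.Wait()
    }
    return s.version
}

// publish bumps the version and wakes *all* waiters at once; it is safe
// to call repeatedly, which close() on a channel is not.
func (s *status) publish() {
    s.mu.Lock()
    s.version++
    s.mu.Unlock()
    s.cond.Broadcast()
}

func main() {
    s := newStatus()
    var wg sync.WaitGroup
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            v := s.wait(0)
            fmt.Printf("waiter %d saw version %d\n", id, v)
        }(i)
    }
    s.publish() // one call wakes all three waiters
    wg.Wait()
}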

What does concern me is the false belief that I think the Go maintainers are expressing, that channels are a kind of panacea for concurrency problems.

Can you convert any program that uses shared state into one that uses channels instead?  Probably.

Would you want to?  No.  For many kinds of problems, the constructs you have to create to make this work, such as passing around channels of channels, allocating new channels for each operation, etc. are fundamentally harder to understand, less performant, and more fragile than a simpler design making use of a single mutex and a condition variable would be.

Others have written on this as well.

Channels Are Not A Universal Cure


It has been said before that the Go folks are guilty of ignoring the work that has been done in operating systems for the past several decades (or maybe rather of being guilty of NIH). I believe that the attempt to push channels as the solution over all others is another sign of this.  We (in the operating system development community) have ample experience using threads (true concurrency), mutexes, and condition variables to solve large numbers of problems with real concurrency for decades, and doing so scalably.

It takes a lot of hubris for the Golang team to say we've all been doing it wrong the entire time.  Indeed, if you look for condition variables in the implementation of the standard Go APIs, you will find them.  Really, this is a tool in the toolbox, and a useful one, and I personally find it a bit insulting that the Go team seems to treat this as a tool with sharp edges with which I can't really be trusted.

I also think there is a recurring disease in our industry to try to find a single approach as a silver bullet for all problems -- and this is definitely a case in point.  Mature software engineers understand that there are many different problems, and different tools to solve them, and should be trusted to understand when a certain tool is or is not appropriate.

OmniOS Community Edition r151028f, r151026af, r151022cd OmniOS Community Edition

OmniOS Community Edition weekly releases for w/c 10th of December 2018 are now available. These are reboot updates only if you have the system/bhyve package installed.

NSS has been updated in all supported releases to version 3.41 fixing CVE-2018-12404 and bringing in the latest set of CA certificates.

bhyve has been updated in r151026 and r151028 to fix CVE-2018-17160. Thanks to Joyent for providing a rapid fix.


For more details, see https://omniosce.org/releasenotes

Any problems or questions, please get in touch via the lobby or #omnios on Freenode

OmniOS Community Edition r151028e, r151026ae, r151022cc OmniOS Community Edition

OmniOS Community Edition weekly releases for w/c 3rd of December 2018 are now available. These are non-reboot updates.

Perl has been updated in all releases to fix a number of CVEs. Refer to the release notes for your particular OmniOS version for more details.

Mozilla nss and nspr are now at versions 3.40.1 and 4.20, fixing CVE-2018-12404 and bringing the latest set of CA certificates to all releases.


For more details, see https://omniosce.org/releasenotes

Any problems or questions, please get in touch via the lobby or #omnios on Freenode

Go modules, so much promise, so much busted /dev/dump

Folks who follow me may know that Go is one of my favorite programming languages.  The ethos of Go has historically been closer to that of C, but seems mostly to try to improve on the things that make C less than ideal for lots of projects.


One of the challenges that Go has always had is its very weak support for versioning, dependency management, and vendoring.  The Go team's historic promise and premise (called the Go1 Promise) was that the latest version in any repo should always be preferred. This has a few ramifications:

  • No breaking changes permitted in a library, or package, ever.
  • The package should be "bug-free" at master.  (I.e. regression free.)
  • The package should live forever.

For small projects, these are noble goals, but over time it's been well demonstrated that this doesn't work. APIs just too often need to evolve (perhaps to correct earlier mistakes) in incompatible ways. Sometimes it's easier to discard an older API than to update it to support new directions.

Various 3rd party solutions, such as gopkg.in, have been offered to deal with this, by providing some form of semantic versioning support.

Recently, go1.11 was released with an opt-in new feature called "modules".  The premise here is to provide a way for packages to manage dependencies, and to break away from the historic pain point of $GOPATH.

Unfortunately, with go modules, they have basically tossed the Go1 promise out the window. 

Packages that have a v2 in their import URL (like my mangos version 2 package) are assumed to have certain layouts, and are required to have a new go.mod module file to be importable in any project using modules.  This is a new, unannounced requirement, and it broke my project from being used with any other code that wants to use modules.  (As of right now this is still broken.)
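For illustration, this is roughly the layout go modules expect for a v2 package (the module path here is a made-up example, not the actual mangos one): the /v2 suffix must appear both in a go.mod at the root of the v2 code and in every module-aware import path.

// go.mod at the root of the v2 code; the /v2 suffix is part of the
// module path itself, not just the repository URL:
module example.com/somelib/v2

// Module-aware consumers must then import packages with the suffix:
//
//     import "example.com/somelib/v2/protocol"
//
// while old-style $GOPATH imports used the same repository path
// without the /v2, so one repository ends up needing to serve two
// different import paths.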

At the moment, I believe that there is no way to correct my repository so that it will be importable by both old code, and new code, using the same import URL.  The "magical" handling of "v2" in the import path seems to preclude this.  (I suspect that I probably need different contradictory lines in my HTML file that I use to pass "go-imports", depending on whether someone is using the new style go modules, or the old style $GOPATH imports.)

The old way of looking at vendored code is no longer used.  (You can opt-in to it if you want still.)

It's entirely unclear how godoc is meant to operate in the presence of modules.  I was trying to set up a new repo for a v2 that might be module safe, but I have no idea how to direct godoc at a specific branch.  Google and go help doc were unhelpful in explaining this.

This is all rather frustrating, because getting away from $GOPATH seems to be such a good thing.

At any rate, it seems that go's modules are not yet fully baked.  I hope that they figure out a way for existing packages to automatically be supported without requiring reorganizing repos.  (I realize that this is already true for most packages, but for some -- like my mangos/v2 package -- that doesn't seem to hold true).

OmniOS Community Edition r151028d OmniOS Community Edition

OmniOS Community Edition weekly release for w/c 26th of November 2018 is now available. This is a non-reboot update.

In r151028:

  • git has been updated to the latest release (2.19.2) fixing CVE-2018-19486

For more details, see https://omniosce.org/releasenotes

Any problems or questions, please get in touch via the lobby or #omnios on Freenode

OmniOS Community Edition r151028c, r151026ac, r151022ca OmniOS Community Edition

OmniOS Community Edition weekly releases for w/c 19th of November 2018 are now available. These are non-reboot updates.

In all releases:

  • openssl has been updated to the latest release (1.0.2q in all and 1.1.0j in r151028), fixing two low severity CVEs.
  • openjdk has been updated to 1.7.0_201-b00

In r151026 and r151028, pkg has been updated to fix a problem that could occur when removing a package if auto-be-naming is enabled.

r151028 has also received an update to the openssh package to work around a bug in VMware that could cause SSH traffic to be dropped, and a small update to system/header to add a file that was missing.

This week also sees a backport to screen in r151022 so that it is built against the more extensive ncurses library, enabling support for more terminal types.


For more details, see https://omniosce.org/releasenotes

Any problems or questions, please get in touch via the lobby or #omnios on Freenode