Automotive Predictions for 2017

January 7, 2017


I’m probably the least qualified person to comment on the automotive industry.  As a family, we only have one car, and we’re more likely to ride a bicycle or take a train than drive the car.  That said, we live in the Silicon Valley, and it seems like there has been a tectonic shift in the automotive zeitgeist from Detroit to Northern California.

Driverless cars will not quite be ready for prime-time.

I see Waymo (Google) cars on the street in Palo Alto pretty much on a daily basis at this point.  I saw the Uber cars in SF, and even the Otto semi-truck driving around.  Tesla is releasing a highly anticipated upgrade to Autopilot, and it seems like every day another auto manufacturer announces that they’re working on driverless cars.  So, we’re getting close.  Really close.  But I don’t think we’re quite there yet.

Even if Tesla completely nails Autopilot, there just aren’t enough Teslas on the road to bring driverless cars into the mainstream.  When I travel to places which are not the Silicon Valley, everyone thinks driverless cars are a pipe dream.  And they have a point.  How do you make a car drive down a freeway in Alberta with drifting snow, where you can barely see the lane markers?  I’m sure we’ll get there, but it’s a lot easier to make your car work in the Bay Area, where we barely even have “weather,” than in a place with monsoons, dense fog, or real winter driving conditions.

Electric cars will still not take off, but probably will in 2018.

Speaking of Tesla, electric cars will still be niche in 2017.  The Tesla Model 3 is slated to be released late in the year, but given that Elon tends to be wildly optimistic with his release dates, I would be surprised if it arrived before 2018.  And really, Tesla is the only contender at this point.  A number of car companies have electric cars (BMW, GM, Ford, Fiat, VW, Nissan), but only Tesla has cracked the nut of making a cool-looking, high-performing car without the range anxiety.  If you can drive 4.5 hours at 70-75 mph, you probably don’t care that it takes 45 minutes at a Supercharger to fill your car to 80% capacity.

The other contenders just don’t have enough cool electric vehicles in the pipeline to make any kind of meaningful impact on the market this year.  They all know, with ever-increasing fuel efficiency standards, that they need to get into the electric car business, but it takes time to develop the technology and build out a charging network.  The Chevy Bolt may do well, but cheap fuel prices will hinder adoption in places which are not the Silicon Valley.  Oh, and just about everyone would rather have a Tesla Model 3 than a Bolt.

Hydrogen Fuel Cell cars will continue to stagnate.

Toyota and Honda seem to be doubling down on fuel cells, and if you’re a hydrocarbon producer like Saudi Aramco, you’re probably rooting for them to succeed.  But fuel cell cars aren’t going anywhere fast.  Refining oil is a dirty business, even if you get “free” hydrogen as a byproduct, and trucking hydrogen around to a non-existent network of hydrogen “gas” stations just isn’t going to happen any time soon, if ever.  On top of that, both companies seem to be taking their styling cues from Michael Bay, and I’m not sure most people, at least in North America, want to drive a car that looks like a Transformer.  As such, the Honda Clarity and the Toyota Mirai aren’t going to sell particularly well.

Uber and Lyft will continue to put a lot of pressure on the Taxi Industry, but will be more expensive.

Both Uber and Lyft need to start making money.  Pretty much no one wants to ride in a taxi at this point, and it feels like the only way taxi cab companies are going to survive is through legislation to preserve their monopolies and keep Uber and Lyft out of their markets.  Their strategy for 2017 will be to continue to hunker down and hope that neither Uber nor Lyft starts to make a profit.  This is not a winning strategy.

There are some other headwinds that ride hailing services are going to face in 2017.  Cities are starting to realize that traditional transit services like buses are being eroded, and the traffic caused by all of the additional cars is going to continue to clog downtown cores.  Expect cities like NY and SF to start imposing ride hailing surcharges which go towards paying for more mass transit.


On Docker Storage and NFS

October 18, 2016

We’re just about to release a new version of Docker Trusted Registry (DTR) at work, and I thought I’d highlight one of the cool features we built for NFS storage.

In the past, we told people who wanted to use NFS to bind mount their NFS directories directly into the directory backing their registry volume.  This is a real pain to configure, particularly if you want to automate the process.  It requires creating the volume, inspecting it to find its path on disk, mounting the NFS directory, and then bind mounting that directory over the volume’s path.  Oh, and you also need to make certain the NFS client options are correct if you want it to work in your HA cluster.  This is less than ideal.
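Roughly, the old dance went something like this (a sketch only; the volume name and paths are illustrative):

# create the registry volume and find where it lives on disk
docker volume create --name dtr-registry
docker volume inspect --format '{{ .Mountpoint }}' dtr-registry
# mount the NFS export, then bind mount it over the volume's directory
mkdir -p /mnt/nfs
mount -t nfs -o rw nfs-srvr:/exports /mnt/nfs
mount --bind /mnt/nfs /var/lib/docker/volumes/dtr-registry/_data

Now multiply that by every replica in an HA cluster and you can see why nobody enjoyed it.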

With DTR 2.1, you can just specify a new flag during install or reconfigure called --nfs-storage-url, which automatically takes care of everything for you.  The best part is that when you join a new DTR replica to the cluster, the new node automatically gets the NFS mount point and everything just works like magic.
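A fresh install looks something like this (illustrative only; the exact image tag and the other flags the installer needs for your cluster are elided):

docker run -it --rm docker/dtr:2.1.0 install --nfs-storage-url nfs://nfs-srvr/exports

The same flag works with the reconfigure command on an existing installation.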

So how does this work?

We’re making use of a little-known feature in the local volume driver which allows you to back docker volumes with NFS.  Instead of the volume being backed by a local file system, the docker daemon mounts the configured NFS directory when the volume is attached to a container and that container starts up.

Let’s say we’ve created an NFS server called nfs-srvr and we’ve exported the directory “/exports”.  We’ll use this exported directory to back a new volume on a different docker host which we’ll call dckr1.  First, on the dckr1 host, let’s make certain that we can access the export:

root@dckr1:~# showmount -e nfs-srvr
Export list for nfs-srvr:
/exports *

Alright, that looks good, so let’s go ahead and create a new volume which is backed by the “/exports” directory.

root@dckr1:~# docker volume create --driver local --opt type=nfs \
--opt o=addr=<IP ADDRESS>,rw --opt device=:/exports --name=test-vol

And finally, let’s attach it to a new container running alpine:

root@dckr1:~# docker run -it -v test-vol:/storage --rm alpine:3.4 sh
/ # ls /storage
...

And that’s pretty much it.  There are a couple of caveats to using it right now.  The first is that (as of Docker 1.12) there’s a bug where you can’t use the hostname of the NFS server, and instead have to use its IP address.  Also, make certain that you have the NFS client utilities installed on the docker host (usually packaged as nfs-common or nfs-utils); otherwise your container will zombie when it starts because the daemon won’t be able to back the volume correctly.
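Installing the client bits is a one-liner.  On Debian or Ubuntu hosts:

apt-get install nfs-common

and on RHEL or CentOS:

yum install nfs-utils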

Party Parrot Time!!

October 12, 2016
docker run -it --rm pdevine/partyparrot

Need I say more?

SpaceX’s Interplanetary Transport System

October 4, 2016

OK, I think I sorta called this one a couple of years ago when SpaceX was experimenting with their Grasshopper demonstrator.  I _still_ don’t know how difficult it will be to refine methane on Mars, but at least Elon has a somewhat reasonable answer.  Here’s my post from Hacker News:

That said, the thing I’m struggling with the most right now is finding an economic reason to build the ITS.  On the “awesome” factor, I’m all on board, but Mars doesn’t really have anything that we don’t have on Earth (that we know of).  In terms of how to finance this beast, though, I feel like setting sights on something closer to home, like orbiting space hotels or trips to the moon, makes a lot more economic sense.  With something closer to home you get a lot of efficiency gains from reusability, which you don’t get when your ship only flies every two years.

ASCII Sprites with Golang

August 29, 2016

I’ve been trying to think of a way to spruce up the Docker container which we print on all of the t-shirts for Dockercon.  At Dockercon 16 in Seattle we told everyone to run this command:

docker run -it --rm hello-seattle

That displays some information about how containers work at a really superficial level.  Somewhat useful for novices, I guess, but it doesn’t exactly have a ton of pizzazz.

When I first started working at Docker a year and a half ago, I converted some of my ancient Pyweek video games to run from Docker containers.  These tend to be a lot more interesting, but getting them running from the command line can be a huge pain in the neck.  You have to pass in the X11 socket, give the container access to the sound card, and do all sorts of nastiness which I wouldn’t want to print on the back of a t-shirt.  This would be fine if the docker engine had built-in support for graphical applications, but for now, I’ve ruled it out for a quick and dirty demo.
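To give you an idea, launching one of those games looks something like this (a sketch; the image name is made up, and you may also need to fiddle with xhost on the host side):

docker run -it --rm -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --device /dev/snd pdevine/pyweek-game

Not exactly t-shirt material.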

Instead, I’ve been thinking about the limitations of ASCII and wondering if there were a way to use it with typical gaming primitives like sprites.  I wanted to keep the size of the container image really small, which ruled out writing anything in python where I’d have to package up the interpreter.  Instead I went with a statically compiled golang application, which means the binary can live by itself inside of a container.
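The nice part about a static binary is that the image needs nothing else in it.  A minimal sketch (the binary and image names here are made up):

CGO_ENABLED=0 go build -o whale-demo .

# Dockerfile
FROM scratch
COPY whale-demo /whale-demo
ENTRYPOINT ["/whale-demo"]

docker build -t whale-demo .

The resulting image is only about as big as the binary itself.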

I’m not actually finished yet, but I managed to cobble together a bit of a demo.  It just bounces around a bunch of ASCII Docker whales, but even that is a step up from the old container.  Here’s a screenshot:

You can give it a try with this command:

docker run -it --rm pdevine/whale-test

Use ‘a’ to add more whales, and ‘z’ to take them away.  ‘q’ will quit out of the program.  Oh yeah, and I guess it goes without saying that you need Docker to run it.  I would imagine it works just fine with Docker for Mac and Docker for Windows, but I haven’t tried them out.

Anyway, it’s a start.  I did run into some interesting problems writing it which maybe I’ll write up in a future post.

The Rift Has Arrived

July 30, 2016

It’s here!  It only took six (!) months to get here, but the Rift has finally arrived.  There was a little bit of a snafu in delivering it (I had to be at home to sign), but after waiting so long, what’s an extra weekend?

Initial impressions:

The Video Card Has Arrived

July 9, 2016

After attempting to get an nVidia GTX 1080 for a few months, I realized it probably wasn’t worth the extra $250, so I opted for a GTX 1070 instead.  What’s been crazy about this rollout of GPUs is that it’s still next to impossible to get any of the cards, and the places that do have them in stock are mostly marking them up 80-100%.  Oh, and in the interim, to fight off the new, cheaper Radeon RX 480 cards from AMD (at half the performance), nVidia just released a pared-down GTX 1060 for $250.

That said, the GTX 1070 should be fine for driving the Oculus Rift… when it finally gets here.  The shipping window for the Rift has come and gone with no sign of the thing, despite my having pre-ordered at the beginning of January.  Apparently you can just go buy one at Best Buy though, which just seems wrong.  Good job, Facebook.

In terms of performance, it seems like you can crank all of the settings up in any game, play at 1920×1200, and everything works great.  The problem is that I don’t really have any of the new blockbuster triple-A titles which can really take advantage of the GPU.  I fired up Portal 2 and Rocket League and both were great at max settings.  Then again, both ran fine on the old GTX 660 (although not maxed out), and Portal 2 came out more than five years ago.  I also tried out Elite Dangerous, which I’m told is really awesome on the Rift.  At high settings the GTX 1070 didn’t even seem to break a sweat.  I’ll have to do some more benchmarking and get concrete numbers.

I Love It When a Plan Comes Together

June 26, 2016

One of the last things to do to get this 4U beast finished was to figure out how to get some of the data off of the old Ultra-Wide SCSI drives.  Originally the machine housed four separate 9GB virtual machines, and the whole thing was stuffed into a co-lo down on San Antonio Road in Palo Alto back in the early 2000s.  I’d used one of the VMs as a general purpose Linux host which did double duty as a web server and an email server, and friends used the others for pretty much the same purpose.  I really wanted to get some of the emails back because I’d lost touch with a friend in Japan, and knew I had his email and snail mail addresses buried somewhere on one of the drives.

There were, however, several problems to tackle before I could get the data back.  Not only did I not have a way to connect the Ultra-Wide SCSI drives to anything since the old motherboard was dead, I also needed to figure out how to read the file systems, since the drives were partitioned as VMFS2 even though the virtual disks inside were primarily ext2.

The SCSI problem I solved by buying a cheapo $45 LSI Logic card on Amazon Marketplace which was being sold as a tape backup adapter.  It looks like someone just plucked it out of an old HP machine, but it was cheap and did the trick.  Four of the five drives spun up just fine, although the years haven’t been particularly kind to them, as the whine from them was pretty much unbearable.  I can only imagine how loud they would have been had I left the old fans in as well.

To get the data off, I just “dd’d” each of the drives into files on the SSD, since I figure I’ll never use them again (also, does anyone need a slightly used LSI Logic Ultra-Wide SCSI card?).  I can attach the files as loopback devices in linux; however, I still don’t have anything which will directly mount the VMFS2 partitions.  I’m fairly certain ESXi can auto-convert from VMFS2 to VMFS3, but I’m not sure how I’d do that, since I have no idea how to loopback mount each of the files under ESXi.
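For the curious, the imaging step is just vanilla dd and losetup (a sketch; the device and file names are illustrative):

# image each drive to a file on the SSD
dd if=/dev/sdb of=/data/scsi-disk1.img bs=1M
# attach the image as a loopback device to poke at it
losetup /dev/loop0 /data/scsi-disk1.img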

Anyway, it’s a moot point.  I just ran “strings” on the image of the drive I wanted and was able to pull out my friend’s email address, and it turns out he still has the same one after 12 years!  I’m going to call it mission accomplished.

There were still a few things to do, though, before I could re-rack the machine.  I used one of the 5.25″ to 3.5″ adapters from the old drives, stacked with the 3.5″ to 2.5″ adapter I had bought, to mount the SSD.  I also needed a cheap video card to hold me over until the GTX 660 was freed up from the gaming rig, so I bought a GT 730, since I figured I might as well get one which was quasi-useful.

Here are some pics…

WordPress with TLS

June 5, 2016

When I got WordPress working for the site, I ended up modifying the base container to make Let’s Encrypt work.  Originally I was thinking I would just set up a reverse proxy in front of it to do the TLS termination and then pass everything unencrypted between the WordPress container and the secure web container.

It turns out that the default configuration for WordPress has a PHP routine which attempts to figure out whether SSL is enabled, so a reverse proxy won’t actually work unless the connection between the front end and the WordPress container is also encrypted.  That defeats the purpose, though, since we’d also have to modify the WordPress container and set up our own self-signed certs.

Anyway, for the site I did end up modifying the WordPress container, but I also figured I’d fix things to make it so other people could use a reverse-proxy.  I need to bug some people on the team here at Docker to get the fix reviewed/accepted, but you can find the change here.

One last thing about Let’s Encrypt.  It’s a pretty awesome service, but the certs it issues expire every three months.  Unless you automate refreshing the certs, you’re going to be in for a rude awakening every 90 days or so.  I ended up using certbot (EFF’s Let’s Encrypt client) and shoving a script into cron.daily which checks for a new cert and, if there is one, brings the website down and replaces it.  It looks kinda like this:

/usr/local/letsencrypt/certbot-auto renew --pre-hook "docker-compose -f /path/to/docker-compose.yaml stop" --post-hook "docker-compose -f /path/to/docker-compose.yaml up -d"
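Dropped into cron.daily, the whole script is just a wrapper around that command (a sketch; remember to make the file executable):

#!/bin/sh
# /etc/cron.daily/certbot-renew
/usr/local/letsencrypt/certbot-auto renew \
    --pre-hook "docker-compose -f /path/to/docker-compose.yaml stop" \
    --post-hook "docker-compose -f /path/to/docker-compose.yaml up -d"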

It seems to work, but the first 90 days haven’t elapsed yet.  In theory it will replace the certs when they’re close to the 90 day cutoff, so I should probably check that it worked in mid-August.

Getting a GeForce 1080

May 28, 2016

So by accident this morning, my 8-year-old’s new digital watch alarm went off at 6am, which also happened to be exactly when the new nVidia GeForce 1080 GPUs went on sale.  Unfortunately, in the five minutes it took me to fumble for my credit card and open the laptop, they’d completely sold out.  It’s still a few more weeks until the Oculus Rift shows up, but I’d really like to finish the 4U rackmount build soon.