Way back when I was in college in the mid-90’s, I took a year off to professionally write video games. This was in Vancouver, BC, and I was writing (at least attempting to write) in 65816 assembly for the Super Nintendo at a company called Radical Entertainment. At the time I had just finished my second year of college, and I had taken a semester on the basics of machine computation. Everything seemed pretty foreign though after having only programmed at that time in Basic, Pascal, Modula-2, and C.
C is often described as a “low level language” that is “close to the hardware”, which is definitely true, but at the time it felt a heck of a lot closer to Pascal than it did to writing in assembly. The Super Nintendo doesn’t have a lot of memory, either on the stack or in ROM, and it wasn’t really possible to do things like have proper subroutines, at least in the code base I was working on. Most of the code was a mess, and there wasn’t really any formal training; you were expected to just figure it out, and figure it out quickly. That isn’t easy to do when your development system is a cobbled-together Super Nintendo with a development board hooked up through the parallel port to a 486. Assembling and loading the whole game could take 10-15 minutes, so you quickly had to figure out tricks to speed things up, like only loading portions of the game into memory.
Radical had shipped a lot of clunky titles for the SNES in the ’90s, including Bebe’s Kids, Brett Hull Hockey, and Power Piggs of the Dark Age. There were a lot of great, super smart, talented people who worked there, but for whatever reason we just didn’t develop really great games. I suppose this mostly had to do with release pressure, as well as the sheer difficulty of making magic come out of the Super Nintendo. It’s possible, but even with the blood, sweat, and tears we were pouring in, it was still awfully hard.
The state of the code base I was working on, however, wasn’t because I was developing the title from scratch, or because of some clunky library routines; it was because our team was porting two existing games, Speed Racer and Mountain Bike Rally, to work with exercise bicycles. Both titles, particularly MBR, were kind of a mess. Speed Racer had fantastic artwork and a really great soundtrack, but it was a victim of its own ambition. It combined a Mode-7 (faux 3D) racing game with a 2D platformer, and neither part was properly tuned or worked well. The racing parts suffered from brutal yo-yo AI and kind of bizarre track layouts. The platformer had absolutely awful collision detection and unresponsive controls, probably due to having a 7-frame walk cycle animation. MBR was also a Mode-7 game, but it had ridiculously poor controls and made you (honest to god) repeatedly button mash the gamepad in order to pedal the mountain bike. Not exactly a super fun game.
There was something about using a real exercise bike that made both games work, though. The feedback you got from the pedal tension increasing when you rode/drove up a Mode-7 hill and then decreasing when you were coming back down the other side was really compelling. You were getting a workout, and actually having fun while doing it. The interaction with the bike pulled something out of each game and made up for all of the weird deficiencies and idiosyncrasies.
That said, the bikes, to my knowledge, never got sold. Spin classes weren’t really a thing yet, and there just wasn’t a market in 1995 for a $1000+ (in 1995 dollars, no less) fitness bike that hooked up to a kids’ game console. On the flip side, if you can get your hands on one now, the bike/game combo is so extremely rare that it’s (as of 2018) the fourth most expensive Super Nintendo game out there!
A friend of mine (and former co-worker) had been bugging me about adding hard drops into Tetromino, since soft drops are kind of annoying, particularly when you’re at lower levels. I had actually been really reluctant to do that, because I was trying to base the gameplay as closely as possible on NES Tetris, but with the skin of the original Tetris. It turns out that the original Electronika 60 (née PDP-11) version of Tetris did have hard drops, so I have finally relented.
My 10yo daughter is somewhat responsible for this. She pulled out the two Nintendo DS’s that my wife and I used to play on all the time, but which have sat in the cupboard for the better part of a decade. After changing out the batteries with some cheap Chinese knock-offs from Amazon, they’re both working great and I got some quality DS and GBA Tetris time. I realized pretty quickly how much it sucks not having hard drops.
If you want to check it out, try running docker run -it --rm pdevine/tetromino. If you have an older version, you’ll probably have to re-pull the image. Alternatively, just check out the video below.
Way back in 2006 when the Iraq War was in full swing and Arrested Development was finishing up its first run, I made an agreement with my (at the time) girlfriend (now wife) that if I got rid of a bunch of my old PCs, I could get a brand spanking, shiny, new Mac Pro. The first of the new aluminum ones which was rocking four cores at 2.66 GHz, a big pile of RAM, and even a SATA hard drive.
That was twelve years ago, and although I had upgraded the machine before, it was looking a little long in the tooth. It was clear that I was going to need to do some major surgery on this beast if I was going to keep it going in the future.
A glance at Ebay shows they now sell for somewhere in the $80 to $150 range, which is a long way from the $2500+ (about $3100 in 2018 dollars) that I paid for it. This was one of the first 64-bit machines, in one of the nicest looking chassis of any system that I’ve ever owned. So what can we do with this machine?
The first time I upgraded it, in 2009, was to replace the original terrible nVidia card that was causing a lot of issues. I ended up getting an ATI Radeon HD 4870 with 512 MB, which seemed to work well. In 2011, I upgraded the RAM to 16 GB, and added a stonking fast (for the time) 240 GB SSD drive. The onboard SATA controller is only SATA-2, which is great for spinning disks, but ideally one would want something faster for an SSD. This particular SSD drive was only SATA-2 though, so that wasn’t really a problem. Somewhere along the way I also shoved in a couple of 750 GB SATA drives on which I was keeping copies of all of our home movies and photos.
So what could we upgrade in 2018?
On the operating system side, the machine was stuck at OSX Lion because Apple had decided to stop supporting the first gen Mac Pros in 2012. This had to do with the original EFI bootloader being 32 bit instead of 64 bit. There wasn’t really anything wrong with the bootloader though, and along the way the community had managed to patch it to make it still work. That meant I could upgrade all the way to OSX El Capitan.
Upgrading it, however, ended up being quite the chore. The 240 GB drive had quite a bit of data on it, and the process required putting a copy of El Capitan on the disk before running through the hacky upgrade. Somewhere along the way it ended up running out of disk space, and because of the hacked bootloader, it didn’t want to boot back into Lion. I ended up putting the machine in Target Disk Mode and connecting to it from my old MacBook Pro over the Firewire 800 port, but that took a couple of days due to having to order a new cable from Amazon. With some space freed up, I was able to finish the installation, and El Capitan works like a charm.
Unfortunately, with macOS Sierra, High Sierra, and Mojave, Apple has yet again abandoned its old machines, and now requires a CPU instruction set extension called SSE 4.1. This adds a number of new instructions that make things like certain floating point operations work more quickly. Unfortunately, older processors, like my once swanky 2.66 GHz quad core Xeons, weren’t going to cut the mustard anymore. But no matter, is it even possible to upgrade the processors?
The answer is yes, you can upgrade them, but unfortunately not to the Penryn architecture, which introduced the SSE 4.1 instructions. Even though there are Penryn processors that will fit into the same CPU sockets (LGA 771), they aren’t compatible. A pair of quad-core 3.0 GHz Clovertown Xeon processors can be had for $50 on Ebay though, so you might as well upgrade to those (twice the cores!). To get them to work, you have to flash the firmware to make the machine think that it’s a Mac Pro 2,1 system, but flashing the firmware is pretty straightforward.
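Incidentally, if you ever want to check whether a given CPU actually advertises SSE 4.1, you can look at its feature flags. Here's a quick sketch that parses Linux-style /proc/cpuinfo output (the sample string is made up for illustration; on a Mac you'd inspect sysctl machdep.cpu.features instead):

```python
# Look for the sse4_1 flag in /proc/cpuinfo-style text (Linux naming).
def has_sse41(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("flags"):
            # The flags line is a space-separated list of feature names.
            return "sse4_1" in line.split()
    return False

# A made-up sample; on a real Linux box you'd read open("/proc/cpuinfo").
sample = "processor : 0\nflags : fpu sse sse2 ssse3 sse4_1 sse4_2\n"
print(has_sse41(sample))  # True
```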
Changing the CPUs, on the other hand, was probably the most difficult part of the rebuild. This requires removing the memory bay as well as the CPU shroud. This isn’t as easy as it sounds because the memory bay has these extremely annoying little plastic clips which require you to depress all four of them simultaneously and work them out. Getting the CPU heatsinks off also turned into a bit of a chore. They use hex screws to hold them on, and unfortunately one of the 8 screws that hold the two heatsinks in place is extremely awkward to get at and requires a really long screwdriver. I ended up going over to Fry’s Electronics to buy a new screwdriver to get the last screw out, but unfortunately it didn’t fit either. In the end, it turned out that I had a really long Torx screwdriver lying around, and apparently you can use Torx bits with hex bolts, which I hadn’t realized before.
The Fry’s trip wasn’t a complete waste, because I also picked up some CPU thermal paste. With the heatsinks off and cleaned with some rubbing alcohol, it was time to put on some new thermal paste. Spreading the thermal paste is probably the most finicky part of the upgrade, but if you’ve done it a few times, it’s really not that difficult.
Changing most other things on this machine, unlike in newer Apple systems, is really easy. Adding a new drive into one of the four exposed drive bays is even trivially easy. Just pop the sled out, connect a 3.5″ SATA drive to the sled, and push it back in. You’ll need an adapter if you’re going to slot in a 2.5″ drive, which is what I had originally done when I added the SSD drive. One really nice thing about the original motherboard is that it also included extra SATA connectors for hooking up additional drives in the 5.25″ bays. The system originally shipped with an Apple Super Drive (which would read and write both CDs and DVDs), and included room for a second drive in the bay, although it was never populated. The Super Drive uses IDE, so there is also a connector for it on the motherboard.
I actually have stacks of old 4TB Western Digital Red drives that are lying around from my old startup, so I figured why not upgrade the old 750 GB drives, and move the 240 GB SSD drive into the 5.25″ bay. I had some old 3.5″ to 5.25″ converters lying around from the 4U chassis build, so I attached a pair of those to the 3.5″ to 2.5″ converter that the SSD drive was already sitting in. It’s not super pretty, but you can’t really even see the 5.25″ bay once everything is put back together. The trickiest bit here was accessing the extra SATA-2 jacks and threading through the SATA cable. This requires removing the front fan assembly from the machine which first requires removing the plastic shroud over the CPUs as well as partially removing the memory bay. Since I had already done this to replace the CPUs it was easier to do both at the same time. The fan assembly is also really wedged into the system and required a bit of elbow grease and determination to get out, but once it is out, it’s pretty simple to thread the SATA cable from the 5.25″ bay and connect it to the extra jacks.
For the 4x4TB drives, I originally figured I could run them in software RAID-5 (just like you can in Linux), however I discovered after going through all this work that macOS doesn’t actually support software RAID-5. This has left me in a bit of a quandary. You can purchase SoftRAID from Other World Computing, which does support it without having to add any more hardware, or I could just buy a hardware RAID controller. So far I haven’t decided which way to go, however I’m leaning toward the hardware solution, since in theory it might actually be a little cheaper than shelling out full retail price for SoftRAID. Apple did produce a Mac Pro RAID card back in the day, so it might be worth picking one of those up second hand, but I don’t know if they are any good or not.
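For the curious, the reason RAID-5 only needs one drive's worth of redundancy is plain XOR parity. Here's a minimal sketch of the idea (toy block sizes, not how a real RAID implementation lays out stripes):

```python
from functools import reduce

def xor_blocks(blocks):
    # Byte-wise XOR across equal-length blocks.
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple)
                 for byte_tuple in zip(*blocks))

# Three data blocks (one per drive) plus a computed parity block.
data = [b"home", b"movi", b"es!!"]
parity = xor_blocks(data)

# Simulate losing drive 1: XOR the survivors with the parity block
# and the missing data falls out.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt)
```

Running this prints b'movi', the block that was "lost", which is exactly what a RAID-5 rebuild does across a whole drive.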
Much like swapping out the drives in the four main bays, upgrading the system memory is also trivially easy to do. It doesn’t require anything like removing the memory bay itself. You just pull out the memory boards (there are two of them), push in the new DIMMs, and power the machine on again. The boards themselves have LEDs on them which will signal if there is a memory parity error or not. Originally Apple claimed that you could only put 16GB of RAM in the machine, but in reality it can easily address up to 32GB of RAM, and if you upgrade past OSX Lion, you can apparently even get 64GB of RAM to work in some configurations. I already had 16GB of RAM, so figured upgrading to 32GB of RAM would be sufficient. This actually ended up being the most expensive part of the upgrade, and the chips that I bought from Other World Computing were not exactly matched to each other, so I was initially a little worried when the parity error LED lit up. I ended up reseating the chips and the error went away, and so far it looks like everything is behaving correctly.
One part of the upgrade did not go so well though. I ordered an ATI Radeon 5770 GPU off of Ebay to replace the older Radeon 4870, but the card unfortunately didn’t work, and the machine wouldn’t boot when I powered it up. I also accidentally knocked off one of the little clips on the end of the PCI-e slot that holds the card in place. This isn’t really a huge deal, as the GPU still fits in and is pretty snug. I’m not going to use this machine for any heavy gaming, so I think I’ll just stick with the Radeon 4870 for now.
So was all this work worth it? It was really fun rebuilding the machine, but there are a few things that I’m not super thrilled about, like not being able to get it to work with Sierra, High Sierra, or Mojave. The Xeon CPUs are also too old to support Extended Page Tables, which means the machine can’t run Docker Desktop, so I’ll have to see how well the newer versions of VMware Fusion work in order to use Docker in a Linux VM. That said, I spent less than $200 on the upgrade, although I did have a lot of the parts on hand. Even without going all out like I did (skipping the 16TB of drives, and putting in 16 GB of RAM instead of 32 GB), you could build one of these machines for around $250, which is pretty decent if you just want a Mac to futz around with. For a little extra money, though, you could get a Mac Pro 4,1, which is also very upgradeable and can work with much more modern software. If you want a cheap Mac, that would be my recommendation.
TL;DR – A falling tetromino block game based upon NES Tetris in the style of the original 1984 Tetris by Alexey Pajitnov
Try it out with the command:
$ docker run -it --rm pdevine/tetromino
It should look something like:
This game was written because of an offhand remark my boss made to me in 1994. I had just finished up two years of college in Vancouver, British Columbia when I took a job at Radical Entertainment to write Super Nintendo games. I wasn’t a particularly great programmer at the time (which is arguably still true); however, I found writing games in 65816 Assembly (the language of choice for the Super Nintendo) extremely daunting. There was sparse documentation, few code samples, and the assembly that I had learned in college was fairly different from the code I was writing at Radical. That, coupled with a slow build process on an Intel 486DX2-66 computer, and an even slower transfer of the object code across a parallel port onto our homebuilt development kits, meant a lot of 80-hour work weeks.
I remember mentioning to my boss, Jack Rebbetoy, about how difficult I was finding things, and he made a remark about how we had engineers on-staff who could knock out a copy of Tetris in a weekend. This really blew my mind. We did have a lot of really talented engineers at Radical, but at the time I was having a hard time even getting a sprite to render correctly on the screen.
Fast forward 24 years, and I’m now working as a Product Manager at Docker. I still love writing games, but haven’t done it professionally since that job at Radical. The problem with using Docker to run and distribute games, though, is that there isn’t an easy to use video subsystem. You can use X11 with a remote display, or just pass the X socket into the container, but that’s not exactly easy to do. So given the limitation of really only being able to use ASCII text, can you still create meaningful games and applications?
I ended up putting together a very rudimentary ASCII sprite library (github.com/pdevine/go-asciisprite), written in Go. Go was chosen since I wanted the compiled binary to be really small. I think the entire Tetromino Docker image weighs in at about 2.8MB, which is shameful by 1994 standards (two whole floppy disks!), but pretty tiny in 2018. After putting together the sprite library, the choice for the first game to write seemed pretty natural: could I actually write Tetris in a weekend?
The answer is a pretty definitive “yes”. Even though I’ve spent more than a weekend writing Tetromino Elektronika, the core of the game was written in about that amount of time, mostly spent riding Caltrain up and down the peninsula in the SF Bay Area. I tried to keep the timings as close as possible to those of the original Classic NES Tetris game (sorry, no hard drops!), and tried to make the look-and-feel similar to the original Tetris from 1984 on the old Soviet Electronika 60.
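For anyone curious what matching "NES timings" actually means, the usual reference numbers are frames per row at 60 FPS. Here's a small sketch of that lookup; to be clear, this is not Tetromino's actual code, and the level-to-frames table is the commonly cited one for NTSC NES Tetris, so treat it as an assumption:

```python
# Frames the game waits before dropping a piece one row, by level.
# Commonly cited NTSC NES Tetris values; levels 19-28 take 2 frames,
# and 29+ drop every frame (the infamous "kill screen" speed).
FRAMES_PER_ROW = {0: 48, 1: 43, 2: 38, 3: 33, 4: 28, 5: 23, 6: 18,
                  7: 13, 8: 8, 9: 6, 10: 5, 11: 5, 12: 5,
                  13: 4, 14: 4, 15: 4, 16: 3, 17: 3, 18: 3}

def drop_delay_seconds(level, fps=60):
    # Convert the frame count for a level into wall-clock seconds.
    if level in FRAMES_PER_ROW:
        frames = FRAMES_PER_ROW[level]
    elif level < 29:
        frames = 2
    else:
        frames = 1
    return frames / fps

print(drop_delay_seconds(0))  # 0.8 seconds per row at level 0
```

At level 0 a piece falls less than twice a second, which is why soft drops (and now hard drops) matter so much at low levels.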
So with that, hopefully you enjoy the game, and maybe even get inspired to create something with go-asciisprite. Code is available at github.com/pdevine/tetromino
A while back I posted about the pdevine/whale-test container you can pull from Docker Hub. When I wrote that, I had been trying to write a sprite library in golang based upon the gizak/termui library, but I kept running into weird issues with the way that it was buffering data before flushing it out. I guess that’s the problem with trying to graft something like a sprite library on top of a library designed for creating text widgets.
As time went by the whale-test code kind of bit rotted, and it looks like termui isn’t being supported anymore at all. Most of the posts in the github repo are asking if the project is dead, and none of the pull requests have been merged in years (actually, one was, and that now causes a panic). So, instead of using termui, I decided to just use nsf/termbox-go directly, which is the low level library that termui is based upon.
If you do a docker run -it --rm pdevine/whale-test, it’s now written with the newly revamped, nascent sprite library called pdevine/go-asciisprite. I’ve also added a new demo which you can run with docker run -it --rm pdevine/pants, which is very trouser related. I work for Docker after all.
All source code can be found at http://github.com/pdevine/go-asciisprite
I was helping my 10-year-old out with her math homework this evening, and she had been given an interesting series of problems which asked you to find the difference between two numbers. Pretty straightforward stuff, except that the problems specified whether each individual digit in each of the three separate numbers (the minuend, the subtrahend, and the difference) had to be even or odd.
This is a lot more challenging!
There was one particular question, which looked something like this, that everyone was struggling to find an answer for:
EEOE – OOEE = EEEE
‘E’ being an even digit, and ‘O’ being an odd digit. It turns out there are actually 9,000 answers to that, but how do you prove it? Like any normal (former) software engineer, I decided to just brute force it and hack together a solver. You can save this and run it by passing it three strings of E’s and O’s.
import sys
import itertools

odd = [1, 3, 5, 7, 9]
even = [0, 2, 4, 6, 8]


def compute(minuend, subtrahend, difference):
    def get_nums(template):
        nums = list()
        factor = 1
        # Build candidate digit values column by column,
        # starting from the least significant digit
        for x in reversed(template):
            if x.lower() == 'o':
                nums.append([n * factor for n in odd])
            if x.lower() == 'e':
                nums.append([n * factor for n in even])
            factor *= 10
        # Discard any candidates with a leading zero
        return [sum(n) for n in itertools.product(*nums)
                if len(template) == len('%d' % sum(n))]

    num_a = get_nums(minuend)
    num_b = get_nums(subtrahend)

    nums = list()
    for x in num_a:
        for y in num_b:
            if x < y:
                continue
            diff = x - y
            if len('%d' % diff) != len(difference):
                continue
            # Check the parity of each digit of the difference,
            # again starting from the least significant digit
            found = True
            for c in reversed(difference):
                if c.lower() == 'o' and diff % 2 == 0:
                    found = False
                    break
                elif c.lower() == 'e' and diff % 2 == 1:
                    found = False
                    break
                diff //= 10
            if found:
                nums.append((x, y, x - y))

    for n in nums:
        print("%d - %d = %d" % n)


if __name__ == '__main__':
    compute(sys.argv[1], sys.argv[2], sys.argv[3])
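As a sanity check on the solver, here's a small standalone snippet that verifies one of the answers it turns up, 8212 - 3584 = 4628, digit by digit against the templates (the matches() helper is just for this check, not part of the solver):

```python
def matches(n, template):
    # True if each digit of n has the parity the template asks for
    # ('e' for even, 'o' for odd) and the lengths line up.
    digits = str(n)
    if len(digits) != len(template):
        return False
    return all((int(d) % 2 == 0) == (t.lower() == 'e')
               for d, t in zip(digits, template))

assert matches(8212, 'EEOE')
assert matches(3584, 'OOEE')
assert matches(8212 - 3584, 'EEEE')
print("8212 - 3584 =", 8212 - 3584)
```

Working it through by hand is a nice illustration of why the problem is tricky: every column needs a borrow to flip the parity that the raw digit subtraction would otherwise produce.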
SpaceX stole the show this past year by launching an unprecedented 18 rockets. The last launch, fittingly, carried another ten Iridium NEXT satellites, for a total of 40 Iridium satellites in 2017. They also had a couple of other firsts, including launching the first “flight proven” first stage for the SES-10 mission last March, as well as sending a reused Dragon and first stage to the International Space Station in December. The last Iridium flight also reused a first stage booster, so in total they reused five first stage boosters. What’s remarkable is how routine they’re making recovery and reuse look, and it’s going to go a long way toward making space affordable.
With all of those successes, however, there were still a few misses. Neither the Falcon Heavy nor the un-crewed Dragon 2 ended up flying this year. Elon was quoted as saying that they didn’t realize how difficult it would be to get the Falcon Heavy to work, and that “strapping three Falcon 9 cores together is tougher than it sounds”. Who would have thought that rocket science would be difficult? The Dragon 2 also didn’t take flight this year; however, there’s plenty of action for it, along with the Falcon Heavy, on the launch manifest for 2018.
2017 also came and went without any paying passengers riding on a Virgin Galactic or a Blue Origin sub-orbital flight. Blue Origin got a little closer to that goal, doing a flight in December (just a week or two ago) which made it all the way up to 322,000 feet. They also took a passenger, named “Mannequin Skywalker”, up on the flight, who seemed pretty stoic about the entire affair.
I’m going to notch this prediction up as correct, but given the glacial pace of aerospace in general, it wasn’t that much of a stretch.
The trend of getting rid of airplanes with more than two engines continued through 2017, and as I’m writing this, there aren’t any left flying for any major airline in North America. There are a few holdouts flying in Europe (I did end up flying an SAS A340 to Copenhagen earlier last year), and the Asian super carriers still have plenty, but most of these planes are being replaced by 777s, 787s, A330s, and A350s. Airbus has said that if a new order doesn’t come in from Emirates, they’re going to cancel the A380 altogether. The old hub-and-spoke airline model is pretty much dead at this point, and thank goodness. Being able to fly smaller planes, more frequently, direct to your destination is always going to be preferable to being stuck in a terminal waiting for a connecting flight.
It was a mixed year for the beleaguered Bombardier CSeries. Delta gave it a vote of confidence with an order for 75 planes, and options for 50 more. After which, Boeing turned around and had the US Commerce Department slap a 300% tariff on the plane, accusing Bombardier of dumping planes below cost on the US market. After that, Bombardier gave away 50.1% of the CSeries program to Airbus, which is going to turn around and build the planes in Alabama. Oh, and the US International Trade Commission may drop the tariff after all, because Bombardier built a total of 17 planes last year (off of their 22 plane target), and hey, it’s not like Boeing isn’t heavily subsidized, right? Phew.
Meanwhile, Boeing keeps racking up more orders for the 737 MAX (another 175 are slated to go to flydubai). Southwest just started taking deliveries of its 737 MAX planes (it was the US launch partner), and Boeing delivered a total of 49 planes this year. Airbus delivered 134 A320s after delivering 68 in 2016.
Orders seem to be slowing down though, with the 737 MAX topping out at 420 new orders this last year, compared to 540 orders the year before. Airbus fared even worse in this regard with only 185 orders this year vs. 711 orders in 2016. Both programs have huge backlogs though, with Boeing needing to deliver 4,016 planes and Airbus backlogged a whopping 5,022.
So I’m not certain if I can fully claim this one. There definitely were more narrow-body planes this year, but I’m baffled why Boeing decided to try and squeeze Bombardier. I guess they didn’t predict that Bombardier would counter by giving away half of the CSeries program. I’m not sure anyone could have predicted that.
I’ve recently found myself hooked on watching the Classic Tetris World Championship on YouTube. Here’s the CTWC 2016 Final between Jonas and Jeff. It’s completely mesmerizing, and it’s amazing to me that anyone can play Tetris consistently at that level.
The Championship is played on an NES, and it inspired me to dust off my old consoles and bust out a copy of Tetris. Sure, I could run it in OpenEmu, which is a fantastic piece of software, but there’s something to be said for running on a real console. The problem I found, though, was that two of my NES consoles didn’t want to boot (yay red blinky light) with either of the copies of Tetris that I have. I did finally manage to get one to boot on my Retro Duo, but it got me thinking that I wanted something better than having to keep cleaning out all of my old cartridges.
One solution would have been to just break down and buy one of those bootleg multicarts that you see all over Amazon and Ebay. At least I wouldn’t have to keep switching out the cartridge every time I wanted to play another game. The problem from what I’ve read though is that many of them are of dubious quality, and ethically they seem even more dubious. I suppose I could also have bought one of the new NES Classic Editions, but they don’t even come with Tetris!
Instead, I opted for the Everdrive N8, which is essentially a Nintendo cartridge with an SD card reader on it that can play ROMs directly off of an SD card. It also adds some neat features like being able to save your games, and allows you to use Game Genie cheats. It’s basically like OpenEmu except that you get to play on real hardware. Pretty slick!
Getting the Everdrive N8 to work was, admittedly, a bit of a challenge. The first problem is that it doesn’t ship with any kind of instructions. You need to prepare an SD card (one between 4GB and 32GB, apparently; I had an extra 8GB SD card lying around), and it has to be formatted with FAT/FAT16/FAT32 with a copy of the Everdrive N8 OS unpacked onto it.
For whatever reason the SD card reader on my ancient Macbook Pro didn’t initially want to let me write to the card. It turns out you need to set the silly plastic lock about halfway between locked and unlocked positions (!) in order to get diskutil to work correctly. I managed to repartition and format the SD card after a few dozen attempts, and found v16 of the NesOS on the Interwebs. My first attempt at loading the N8 on the Retro Duo ended up with some strange behaviour with it ultimately hanging on a screen saying “OS init”, but after reformatting the SD card yet again everything worked.
Using the N8 is pretty straightforward, but the various buttons used for selecting things are counterintuitive. Once you do select a ROM, load times with my SD card were blisteringly fast. It really only takes a half second or so for a game to load up. NES and Famicom ROMs are pretty tiny, so my 8GB SD card can easily hold my entire collection of games, plus probably every game ever created for the NES. I tried out about a dozen or so and all of them loaded without any issues. That’s pretty amazing given the flakiness that the NES can have with dirty cartridge contacts.
So why use the N8 over the new NES Classic Edition? Well, if you can manage to get a Classic Edition, it suffers from some real problems. The controller cables are short and you’re locked into the 30 pack-in games. The most egregious thing for me though is that it runs an emulator instead of running the ROMs on real hardware. For that, you either need one of those janky knockoff multi-carts, or something slick like the Everdrive N8.
If there’s one thing I’m even less qualified to predict than my automotive predictions, it has to be on what’s happening in the aerospace industry. But, that didn’t stop me before, so why stop now?
2017 is going to be a relatively boring year for space
I say this right after SpaceX has just launched for the first time since the September anomaly and there are now 10 Iridium Next satellites in orbit. Both the launch and landing on “Just Read the Instructions” went off without a hitch. SpaceX is definitely going to steal the show this year, and hopefully they’ll be able to resume, and even accelerate, their torrid pace of launches. The most exciting launch will be the oft delayed Falcon Heavy demonstration flight from the old Space Shuttle and Saturn V launch pad at the Kennedy Space Center, followed by the first “Crew Dragon” demonstration late this summer.
Neither the “Crew Dragon” nor Boeing’s CST-100 spacecraft is going to carry passengers this year though. For that we’ve got to wait until 2018. In fact, 2018 looks pretty promising for aerospace in general, with the James Webb Space Telescope and the wheel-less Mars InSight lander.
The big question marks are Virgin Galactic and Blue Origin. Virgin Galactic has resumed testing of SpaceShip 2, but I’m guessing that 2017 is going to come and go before any paying passengers get to ride on it. As for Blue Origin? Who knows. They probably have the best shot of launching passengers on sub-orbital flights this year, but Jeff Bezos is so tight lipped that it’s difficult to say with any real confidence.
We’ll see a lot fewer 747s, A340s and A380s
Over on the aeronautical side of things, airlines will continue to downsize from their super-jumbos in favour of 777s, 787s, A330s and A350s. There’s pretty much no reason to fly with more than two engines at this point, and three engine jumbos like the MD-11 and L-1011 are already ancient history. That leaves Boeing with the “Queen of the Skies” and Airbus stuck with the A340 and the A380. Even with fuel prices staying at relatively low levels, it still makes more economic sense to fly longer haul routes with smaller planes than it does to fly a plane with twice as many engines. Even factoring in the cost of the additional crew, it’s still cheaper than all of the extra fuel that gets consumed.
United already said it’d accelerate phasing out its 747s this year, and Delta has said the same thing. That means there won’t be a single US carrier flying 747s by the end of 2017. Meanwhile, Airbus is souring on the A380neo (new engine option), which means there aren’t going to be a lot of fuel savings any time soon.
… and a lot more narrow bodies
For smaller planes, it’s still unclear whether the Bombardier CSeries will take off. Embraer is taking its fight against Bombardier to the WTO; however, it’s not like Bombardier is making any money at this point. The real turning point could be when Delta starts to fly the CSeries, and whether it’s cost competitive and reliable. That’s not expected until spring of 2018, so it could be another long year for Bombardier.
The 737 MAX, on the other hand, will see its first delivery this year, and presumably we’ll start seeing them in the US shortly afterward, starting with Southwest. The MAX isn’t a new airframe though; it just swaps in new engines to be something around 14% more efficient. That’s not much different than the A320neo which Airbus introduced last year.