Garmin Edge 500 with Heart Rate and Cadence

I’ve had my first week with my new Garmin Edge 500 with cadence sensor and premium heart rate monitor strap, so it’s time for a review. I bought it on Wiggle for about JPY 24,500 ($274).

Around the time I bought the Garmin Edge 500, the new Edge 510 came out. It adds a touch screen, wireless connectivity to a smartphone and various nifty new features, but is also more expensive, so I went for the existing 500.

I switched to the Garmin after more than a year and over 8,000 km of GPS logging using Android phones, mostly my Google Nexus S. Here are my first impressions (the cadence sensor in the bundled set is installed on my son’s bike for use with his 500, so it’s not part of this review):

  1. I really love being able to use a heart rate strap and it’s nice to be able to see the HR figure without having to push a button (daylight permitting). I can ride at a consistent effort level, avoiding both effort too light to build stamina and extreme effort that would lead to premature exhaustion. If money were no object, a power meter would work best (which the Garmin supports). A heart rate monitor is an inexpensive alternative that works for most cyclists wanting to improve their performance.
  2. Because of its barometric altimeter the elevation totals are much more meaningful on the Garmin than on the GPS-only phone, where they may be exaggerated by a factor of 2 to 3. Current altitude data on the Android is OK, but small variations add up too much and grades on climbs and descents may be overestimated.
  3. I love the 90 degree turn quick attach / quick release of the Garmin. It feels both secure and convenient. It is more confidence inspiring than the Minoura iH-100-S phone holder for my Android, which is generally reliable, but not 100% bulletproof. Even after using a bumper for the phone, which has improved the grip of the holder, I’ve had a few instances where on bumpy roads only the USB cable attached to the phone saved my day. I would never entrust my $300 phone to the Minoura without some kind of backup method of attachment, while I feel safe about the Garmin’s mode of attachment.
  4. Importing the rides into Strava or Garmin Connect after the ride is really easy. I just connect the Garmin to the PC’s USB cable and click “upload” on the website; the browser plugin finds the fresh tracks and uploads them. Assigning a name is marginally easier with a real keyboard than with the soft keyboard in the Android Strava app. With the smartphone I could also upload rides while I’m on the road, but why do that if I’ll still add more kilometres until I get back home? That would only be a benefit on a multi-day tour without a laptop.
  5. One drawback of the Garmin is lack of direct Linux support. My son runs Ubuntu on his laptop, while Garmin only officially supports Microsoft Windows and Mac OS X, so he asked me to upload his activities on one of my PCs. There’s a workable solution though. When you connect the Garmin to a USB port on an Ubuntu machine, it gets mounted as a removable volume named “GARMIN”. In there is a folder called Garmin, with another folder Activities inside which contains all logged rides as .fit files. Copy those to your hard disk and then upload them manually from a browser (Strava supports .gpx, .tcx, .json, and .fit files).
  6. When leaving the house, both the Garmin and the Android take a short while to lock onto the satellites and the Android seems to have something of an edge (excuse the pun) over the Garmin, which does seem to take its time. Maybe that’s because the Android pulls satellite position data off the web, while the Garmin can only use whatever data it captured before. In one unscientific test, I took my Android and my Garmin outside in the morning. The Android had a satellite lock in 15 seconds while the Garmin took a more leisurely 44 seconds. This is a minor issue to me compared to the next one, GPS precision.
  7. While I have seen better GPS results on some rides from the Garmin than the Android, switching from the latter to the former has not been a dramatic improvement. I think their results are still in the same class, i.e. far from perfect, especially in built-up areas. Neither is like my car GPS, which is pretty solid. Both my son and I have been riding on Strava segments in Tokyo, expecting to be ranked but found the segment didn’t show up because the plotted route was slightly off to the side, so the segment start or end didn’t match up.
  8. Having temperature data on the Garmin is nice, but not really important to me. Unlike heart rate and cadence it’s not feedback that you can use instantly in how you cycle. Your body is a temperature sensor anyway and how you dress is at least as important as the absolute temperature.
  9. The Garmin 500 battery is supposed to last “up to 18 hours”, which would cover me on everything but 300 km and longer brevets, but on any significant rides I tend to take my Android phone, which I use for Google Maps, e-mail, SMS and yes, even the occasional phone call. Using an external 8,000 mAh battery for the Android, battery life has not really been an issue. The same battery will charge either device (one at a time), provided I take both a mini and micro USB cable with me.
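The altitude issue in point 2 is easy to simulate: if you sum every positive wiggle of a noisy GPS altitude track, you overstate the true climb by a large factor, while a simple threshold filter (a rough stand-in for the smoother signal a barometric altimeter delivers) comes out close to the truth. This is an illustrative sketch, not Garmin’s or Strava’s actual algorithm:

```python
import random

def total_ascent(altitudes, threshold=0.0):
    """Sum positive elevation changes, ignoring moves smaller than threshold."""
    total = 0.0
    last = altitudes[0]
    for a in altitudes[1:]:
        delta = a - last
        if abs(delta) >= threshold:
            if delta > 0:
                total += delta
            last = a
    return total

random.seed(1)
# A steady 100 m climb over 200 samples, plus roughly ±3 m of GPS-style noise
true_profile = [i * 0.5 for i in range(201)]
noisy = [a + random.uniform(-3, 3) for a in true_profile]

raw = total_ascent(noisy)            # counts every noise wiggle as climbing
filtered = total_ascent(noisy, 5.0)  # only counts moves of 5 m or more
```

With these numbers the raw sum lands in the 2–3× exaggeration range mentioned above, while the filtered total stays close to the true 100 m.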
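The Ubuntu workaround from point 5 can be scripted in a few lines. The mount point is an assumption — on my Ubuntu version the device shows up as /media/GARMIN, but other systems may mount it elsewhere:

```python
import shutil
from pathlib import Path

def copy_activities(garmin_mount, dest):
    """Copy all .fit activity logs from a mounted Edge 500 to dest."""
    src = Path(garmin_mount) / "Garmin" / "Activities"
    dest = Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for fit in sorted(src.glob("*.fit")):
        shutil.copy2(fit, dest / fit.name)  # copy2 preserves timestamps
        copied.append(fit.name)
    return copied

# Adjust the mount path to wherever Ubuntu mounts the "GARMIN" volume:
# copy_activities("/media/GARMIN", Path.home() / "rides")
```

The copied .fit files can then be uploaded manually from any browser.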

Summary

If my Android had an ANT+ chip or supported BTLE (BT 4.0) for using a heart rate monitor as well as a barometric altimeter, then it would still be my first choice for logging bike rides. Given the limitations of my phone and the reasonable price of the Garmin Edge 500 I am very happy with my purchase.

Ultegra Di2 versus Alfine 11 Di2

I just saw a post by Bike Friday head designer Rob English about Ultegra Di2 becoming available on their high end folding bikes. Di2 is Shimano’s electronic gear shift system. The first version appeared on their high end Dura Ace group set used by professional racers. The latest Ultegra Di2 is more affordable, but it’s still not cheap (about US$2,300 vs. $4,000 for the group set). A few months ago my son had the chance to take the Di2-equipped 700C bike of my friend Eric of the GS Astuto team for a spin and simply loved it. Gear changes were so quick and precise and the front derailleur adjusts as you switch through the cassette at the rear. No manual trimming is ever required to avoid chain rub. You never mess up any gear shifts, even under load. Once set up the system remains precisely tuned, with no maintenance required for months.

Some cyclists are skeptical about electronic shifting because it involves batteries. That’s somewhat understandable, since as users of mobile phones and digital cameras we have all experienced running out of charge, often when it’s most inconvenient. However, from what I hear one charge of the Di2 battery should last you about 1,000 km of cycling, far further than the range of an average car’s fuel tank. A battery that lasts weeks or months should be good enough for most people. What’s more, even if you do run out of power you get ample warning first: the front derailleur stops working before you run out of juice for the more important rear derailleur. The ideal setup of course would be electronic shifting combined with a dynamo hub. You would get all the benefits of an electronic system with the self-sufficiency of an all-mechanical setup.

After Ultegra Di2, some people were hoping for Shimano to announce a 105 version of Di2 as the next step of digital shifting for the masses (105 is the next road group below Dura Ace and Ultegra), but instead Shimano chose to announce Alfine 11 Di2 (Shimano SG-S705), an electronic version of its 11 speed internal geared hub (IGH), the mechanical version of which had been launched in 2010.

Alfine 11 Di2 addresses the vast city and commuter market, but it should also be interesting for road, touring and mountain bikes. IGHs do without a vulnerable derailleur and require less maintenance. The Alfine 11 gear range (low to high) of 1:4.09 is wider than the 1:3.74 spread of a compact crank (50/34) with an 11-28 cassette. Unlike the mechanical version, Ultegra Di2 does not yet support triple cranks, and bigger cassettes are only possible via non-standard hacks.
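The 1:3.74 figure for the compact setup can be checked with the usual highest-gear over lowest-gear calculation; this is a sketch of that arithmetic, with the 4.09 Alfine spread taken from Shimano’s published figure:

```python
def ratio_spread(big_ring, small_ring, small_cog, big_cog):
    """Overall range of a derailleur drivetrain: highest gear
    (big ring / smallest cog) divided by lowest (small ring / biggest cog)."""
    return (big_ring / small_cog) / (small_ring / big_cog)

compact_spread = ratio_spread(50, 34, 11, 28)  # about 3.74, as in the text
ALFINE11_SPREAD = 4.09                         # Shimano's published figure
```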

I recently rode some insanely steep hills west of Tokyo (18-20%), which I could not possibly have managed without the lowest gear of my triple cranks setup (50/39/30 and 11-28 with 20″ 451 wheels). Therefore I think I would be more interested in Alfine 11 Di2, even if Ultegra Di2 will be more appealing to road bike purists.

What is Skipity and why is it in Firefox?

I mostly use Google Chrome these days, but I still have Mozilla Firefox installed; it was my standard browser before I switched to Chrome.

Today I launched Firefox again and was surprised to see something called Skipity in its toolbar. Furthermore, when I tried to go to my custom browser start page (a page with my most useful links), it took me to the Skipity website. A Google search showed that Skipity comes as part of an add-on called “Download Youtube video 12.0”. I removed that add-on, restarted Firefox, opened the URL I previously had as the browser start page and went to “Tools > Options > General > Startup” to select that URL as the start page again.

Any software that changes the start page of the browser without your consent should be permanently banned from your computer!

Using Sanyo Eneloop Ni-MH AA batteries to power your mobile phone

About two years ago I started using Sanyo’s rechargeable eneloop batteries. These relatively inexpensive Nickel-Metal Hydride (Ni-MH) cells are available in both AA (単3形) and AAA (単4形) sizes. They are low self-discharge cells that keep their charge for months when not in use. I’ve bought boxes of 8 cells of either type, for use in flashlights, bike blinkies, helmet lights and Bluetooth keyboards.

They are initially more expensive to buy than regular alkaline (primary) cells, but you only need to re-use them about three times before they work out much cheaper than primary cells, while you can actually recharge them hundreds of times before they start losing significant capacity.
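With some assumed prices (an eneloop AA at roughly three times the price of an alkaline AA; both numbers are illustrative, not from the post), the break-even works out like this:

```python
# Assumed illustrative prices, not quoted from the post:
eneloop_price = 300.0    # yen per cell (assumption)
alkaline_price = 100.0   # yen per cell (assumption)
rated_recharges = 500    # "hundreds of times", rounded for illustration

break_even_uses = eneloop_price / alkaline_price       # 3 uses to match alkalines
lifetime_cost_per_use = eneloop_price / rated_recharges  # well under 1 yen per use
```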

Here are some nice gadgets that will take them, which I found sold in convenience stores here in Japan.

These little cases (by alicty.co.jp) take power from two or three regular alkaline AA or Ni-MH AA cells and provide a USB port for powering mobile phones and other small gadgets via a USB power cable. As you would expect, the three cell version is slightly more powerful: my Google/Samsung Nexus S sees it as an AC charger (i.e. it provides more than 500 mA). With the two cell version, the phone shows “charging (USB)” as the status, i.e. it can draw up to 500 mA. The two cell version has a USB-A socket (female) for generic USB cables while the three cell version comes with an integrated micro USB (male) cable. A very similar concept has been around for a while as the MintyBoost.

The nice thing is, if you carry enough pre-charged eneloop cells with you, you can swap cells as needed and have virtually unlimited power. You could even buy primary cells to top up if desperate (one set came bundled with each device), but they would end up costing you more than re-usable eneloop cells in the long term. I’ll carry some Ni-MH cells as spares on long bike trips or hikes, which could come in handy with these little cases.

UPDATE 2012-04-04: I also tried using this adapter with alkaline (primary = non-rechargeable) AA cells and it goes through them quite rapidly. Alkaline AA batteries have notoriously poor performance in high drain applications because of their high internal resistance. You’re much better off sticking with Ni-MH batteries such as Sanyo Eneloop!

It says on the pack that a set of 3 AAs will boost the charge state of a smartphone battery by 30-40%, i.e. it would take you about 3 sets (9 cells) to fully recharge an empty battery. Or put another way, if the phone lasts 5 hours on one charge doing whatever you’re doing, you will consume a set of fresh AAs every 100 minutes to keep it topped up. To provide 500 mA at 5 V (2.5 W) on the USB connector at 80% efficiency would draw 3 W from the batteries, or 700 mA at 4.5 V (3 x 1.5 V). At that kind of load, an alkaline battery might only supply a quarter of its rated capacity, which is normally measured at a much smaller load (which is OK for alarm clocks, TV remote controls, etc. but not high powered electronics like digital cameras or smart phones).
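The arithmetic in this paragraph can be checked in a few lines (the 80% converter efficiency is the assumption stated above):

```python
usb_voltage = 5.0    # V at the USB port
usb_current = 0.5    # A, the "charging (USB)" maximum
efficiency = 0.8     # assumed efficiency of the boost converter
cells = 3
cell_voltage = 1.5   # V, nominal for a fresh alkaline AA

usb_power = usb_voltage * usb_current     # 2.5 W delivered over USB
battery_power = usb_power / efficiency    # ~3.1 W drawn from the cells
battery_current = battery_power / (cells * cell_voltage)  # ~0.7 A
```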

strcpy data corruption on Core i7 with Linux 64bit

If you’re a C programmer, does this code look OK to you?

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char* argv[])
{
  char szBuffer[80];
  strcpy(szBuffer, "abcdefghijklmnopqrstuvwxyz");
  printf("Before: %s\n", szBuffer);
  strcpy(szBuffer, szBuffer+2);
  printf(" After: **%s\n", szBuffer);

  return 0;
}

Here is the output on my server, a Core i7 running Debian 6:

Before: abcdefghijklmnopqrstuvwxyz
 After: **cdefghijklmnopqrstuvwzyz

What the program does is drop two characters from a text string in a buffer, moving the rest of it left by two characters. You would expect the moved characters to stay in sequence, but if you compare the last three characters of the output you can see that isn’t the case. The ‘x’ has been obliterated by a duplicate ‘z’. The code is broken.

It’s a bug, and not a straightforward one, as I’ll explain.

I first came across it a couple of months ago, as I was moving some code of mine from an Athlon 64 Linux server to a new Intel Core i7 server. Subsequently I observed strange corruption in data it produced. I tracked it down to strcpy() calls that looked perfectly innocent to me, but when I recoded them as in-line loops doing the same job the bug went away.

Yesterday I came across the same problem on a CentOS 6 server (also a Core i7, x86_64) and figured out what the problem really was.

Most C programmers are aware that overlapping block moves using strcpy or memcpy can cause problems, but assume they’re OK as long as the destination lies outside (e.g. below) the source block. If you read the small print in the strcpy documentation, it warns that results for overlapping moves are unpredictable, but most of us don’t take that at face value and think we’ll get away with it as long as we observe the above caveat.

That is no longer the case with the current version of the GNU C compiler on 64-bit Linux and the latest CPUs. The current strcpy implementation uses super-fast SSE block operations that only reliably work as expected if the source and destination don’t overlap at all. Depending on alignment and block length they may still work in some cases, but you can’t rely on it any more. The same caveat theoretically applies to memcpy (which is subject to the same warnings and technically very similar), though I haven’t observed the problem with it yet.

If you do need to remove characters from the middle of a NUL terminated char array, instead of strcpy use your own function based on the memmove and strlen library functions, for example something like this:

void myStrCpy(char* d, const char* s)
{
  memmove(d, s, strlen(s)+1);
}
...
  char szBuffer[80];
...
  // remove n characters starting i characters into the buffer:
  myStrCpy(szBuffer+i, szBuffer+i+n);
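As a side note, the overlap-safe behaviour of memmove can even be demonstrated from Python, whose ctypes module calls into the same C library:

```python
import ctypes

# Shift a buffer left by two characters, as the C example tries to do
# with strcpy. memmove is defined for overlapping regions, so the
# character sequence stays intact.
buf = ctypes.create_string_buffer(b"abcdefghijklmnopqrstuvwxyz")
length = len(buf.value)
ctypes.memmove(buf, ctypes.byref(buf, 2), length - 2 + 1)  # +1 for the NUL
print(buf.value.decode())  # cdefghijklmnopqrstuvwxyz
```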

I don’t know how much existing code the “optimized” strcpy library function broke in the name of performance, but I imagine there are many programmers out there who got caught by it like I was.

APC Smart-UPS 750 with Ubuntu 11.04

I finally got myself an uninterruptible power supply (UPS). The infamous August heat in Tokyo has been pushing power use including air conditioning close to the limit of what Tepco can supply: All 10 reactors in Fukushima Daiichi and Daini are either destroyed or shut down. In total about 2/3 of the nuclear power capacity in Japan is currently offline. That gave me one more reason to shop for a UPS. The other was that I have a Linux server and Linux file systems tend to use a lot of write buffering, which can make a mess of a hard disk partition if power is lost before the data is fully written to disk.

A friend recommended APC as a brand. When I researched which of their ranges was suitable for my server, it appeared that some advanced PC power supplies with Power Factor Correction (PFC) have problems with the consumer level APC models, which output a square wave when in battery power mode. The more business-oriented models output something closer to a sine wave, the shape of the power supplied by your utility company. Because of that I went for the APC Smart-UPS range. The server draws less than 50W, so there wasn’t really much point in going for the beefiest models. That’s how I picked the APC Smart-UPS 750 with 500W of output power. My exact model is the SUA750JB, the 100 V, 50/60 Hz model for Japan. If you live in North America, Europe, Australia or New Zealand you’ll use either the 120V or the 230V model. There’s also a 1000W (1500 VA) model, the APC Smart-UPS 1500, which features a larger capacity battery and larger power output.

The UPS arrived within two days. There’s a safety plug at the back of the unit that disconnects the battery for transport; you’ll have to connect it to make the UPS work. The internal lead-acid batteries appeared to arrive fully charged. They are fully sealed units that are supposed to be leak-proof.

My unit came with a manual in Japanese and English but no software of any kind. It came with a serial cable, which I don’t have any use for, as virtually all modern PCs no longer have legacy serial and parallel ports. What I needed was a USB cable with one type A and one type B connector and that was not included. I am not sure why APC bundles the serial cable and not the USB cable. For an item in this price range, the USB cable should not be extra. However, I had a couple of suitable cables lying around from USB hard disks and flat screen monitors with built-in USB hubs, so it wasn’t a problem. You may want to check if the unit you’re buying comes bundled with the USB cable or if you may need to get one separately.

Once you connect the UPS to the PC using a USB cable, you should be able to verify that Linux has detected the device. Run:

me@ubuntu-pc:~$ lsusb
Bus 003 Device 003: ID 051d:0002 American Power Conversion Uninterruptible Power Supply

The software I’m using for Linux is the apcupsd daemon, whose source code is available on SourceForge. I compiled it this way:

./configure --enable-usb
make
sudo make install

To be able to run it you need to edit some config files. In /etc/default/apcupsd set

ISCONFIGURED=yes

In /etc/apcupsd/apcupsd.conf:

UPSCABLE usb (default: smart)
UPSTYPE usb (default: apcsmart)
DEVICE (default: /dev/ttyS0)

Stop and start the daemon and you’re in business:

/etc/init.d/apcupsd stop
/etc/init.d/apcupsd start

While the daemon is stopped you can also run apctest to run various tests on the unit.

Test that the UPS works by pulling the power cable from the wall socket. The UPS should raise an audible alarm and its LEDs should switch from the sine wave symbol to the sine wave with battery poles symbol. Also be aware that UPS batteries do not last forever, especially if they’re used in a hot environment. You may get anywhere between 2 and 4 years of use out of them. Replacement batteries from third parties are usually available for much less than original parts from the UPS manufacturer.

Gateway M-6750 with Intel Ultimate-N 6300 under Ubuntu and Vista

My Gateway M-6750 laptop uses a Marvell MC85 wireless card, for which there is no native Linux driver. Previously I got it working with Ubuntu 9.10 using an NDIS driver for Windows XP. Recently I installed Ubuntu 11.04 from scratch on this machine (i.e. wiping the Linux ext4 partition) and consequently lost wireless access again.

Instead of trying to locate, extract and install the XP NDIS driver again, this time I decided to solve the problem in hardware. Intel’s network hardware has good Linux support. I ordered an Intel Centrino Ultimate-N 6300 half-size mini PCIE networking card, which cost me about $35. Here is how I installed it.

Here is a picture of the bottom of the laptop. Remove the three screws on the cover closest to you (the one with a hard disk icon and “miniPCI” written on it) and open the cover. Use a non-magnetic screwdriver because the hard disk is under that cover too. As a matter of caution, use only non-magnetic tools near hard disks or risk losing your data.

Remove the screw that holds the MC85 card in the mini PCI slot on the right. Remove the network card. Carefully unplug the three antenna wires. Connect those wires to the corresponding locations on the Intel card. Insert the Intel card into the socket on the left. Note: I had first tried the Intel card in the socket on the right but in that case it always behaved as if the Wireless On/Off switch was in the Off position, regardless of its actual state. Even rebooting didn’t make it recognize the switch state. The left mini PCI socket did not have this problem 🙂

Because the Intel card is a half size card you will also need a half size to full size miniPCI adapter to be able to screw down the card to secure it. Instead I simply used a stiff piece of cardboard (an old business card) to hold it in place and closed the cover again. If you take your laptop PC on the road a lot I recommend doing it properly (don’t sue me if the cardboard trick melts your motherboard or burns down your house).

Download the Intel driver and utility set for Windows from the Intel website using a wired connection. Under Ubuntu the card seemed to work the first time I rebooted; I just had to connect to the WLAN.

UPDATE:

I fixed it properly using a half size to full size Mini PCI-E (PCI Express) adapter converter bracket by Shenzhen Fenvi Technology Co., Ltd. in Guangdong. I had found it on Alibaba. I paid $9.50 by Paypal and a bit over a week later five sets of brackets and matching screws arrived by mail from Hong Kong (one set is only $1.90 but the minimum order was 5, so that’s what I ordered). The brackets come with about a dozen each of two kinds of screws. Four of the smaller screws worked fine for me.

VIA PC3500 board revives old eMachines PC

Last September one of my desktop machines died and I bought a new Windows 7 machine to replace it. Today I brought it back to life again by transplanting a motherboard from an old case that I had been using as my previous Linux server. The replacement board is a VIA MM3500 (also known as VIA PC3500), with a 1.5 GHz VIA C7 CPU, 2 GB of DDR2 RAM and on-board video. It still has two IDE connectors as well as two SATA connectors, allowing me to use both my old DVD and parallel ATA HD drives, as well as newer high capacity SATA drives.

After the motherboard swap I had to reactivate Windows XP because it detected a major change in hardware. Most of the hardware of the new board worked immediately: I could boot and had Internet access without any reconfiguration when I first started the machine. I just had to increase the video resolution from the default 640×480 to get some dialogs working.

I then downloaded drivers for the motherboard and video from the VIA website. I now have the proper CN896 (Chrome IGP9) video driver working too.

When I tested the board as a server with dual 1 TB drives (RAID1), it was drawing 41W at idle. Running in my eMachines T6212 case with a single PATA hard drive it draws 38W at idle.

Before removing the old motherboard I made a note of all the cable connections on both motherboards. The front-mounted USB ports and card reader have corresponding internal cables, which connected to spare on-board USB connectors. The analog sound connectors connect to the motherboard too. The only port at the front left unconnected was the IEEE-1394 (FireWire / iLink) port, which has no counterpart on the VIA board.

It feels great to have my old, fully configured machine with all its data and applications back thanks to a cheap motherboard that works flawlessly.

Ubuntu 11.04, GA-H67MA-UD2H-B3, EarthWatts EA-380D, Centurion 5 II, 5K3000

CoolerMaster Centurion 5 II

It’s been 2 months since I wrote a blog post that wasn’t about the Tohoku earthquake and tsunami or the Fukushima 1 nuclear disaster, but today I am taking a break from those subjects. The reason is that I replaced my local Ubuntu server with newer hardware. The primary requirements were:

  • GNU/Linux (Ubuntu)
  • Reasonably low power usage
  • Large and very reliable storage
  • Affordability

I was considering boards ranging from the new AMD Zacate E-350 dual core to LGA-1155 (“Sandy Bridge”) boards with the Core i5 2500K. First Intel’s P67/H67 chip set problems and then the disaster in Japan prompted me to postpone the purchase.

Finally I picked the GigaByte GA-H67MA-UD2H-B3, a MicroATX board with 4 DIMM slots, in conjunction with the Core i3 2100T, a 35W TDP part with dual cores and 4 threads. The boxed version of the Intel chip comes with a basic fan that didn’t sound too noisy to me. I installed two 4 GB DDR3 modules for a total of 8 GB of RAM, with two slots still available. When you install two memory modules on this board you should install them in memory slots of the same colour (either the blue or the white pair) to get the benefit of dual channel.

Gigabyte GA-H67MA-UD2H-B3

I chose an H67 board because of the lower power usage of the on-chip video, and the 2100T has the lowest TDP of any Core 2000 chip. I don’t play games and my video needs are those of a basic office PC. Unlike P67 boards, H67 boards cannot be overclocked. If you’re a gamer and care more about ultimate performance than power usage you would probably go for a P67 or Q67 board with an i5 2500K or i7 2600K and a discrete video card.

To minimize power use at the wall socket I picked an 80 Plus power supply (PSU), the Antec EarthWatts EA-380D Green. It meets the 80 Plus Bronze standard, which means it converts AC to DC with at least 82% efficiency at 20% load, at least 85% at 50% load and at least 82% at full load. It’s the lowest capacity 80 Plus PSU I could find here. 20% load for a 380W PSU is 76W. Since the standard does not specify the efficiency achieved below 20% of rated output, and efficiency typically drops at the lower end, it doesn’t pay to pick an over-sized PSU.
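The efficiency floors translate into wall-socket draw as follows; this is just a sketch of the standard’s arithmetic:

```python
# 80 Plus Bronze efficiency floors (AC-to-DC) at fractional load:
bronze = {0.20: 0.82, 0.50: 0.85, 1.00: 0.82}

psu_rated_w = 380.0
twenty_pct_load = 0.20 * psu_rated_w   # 76 W of DC load, as in the text

def wall_draw(dc_load_w, efficiency):
    """AC power drawn at the wall for a given DC load."""
    return dc_load_w / efficiency

worst_case_wall = wall_draw(twenty_pct_load, bronze[0.20])  # ~92.7 W at the wall
```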

Disk storage is provided by four Hitachi Deskstar 5K3000 drives of 2 TB each (HDS5C3020ALA632). These are SATA 6 Gbps drives, though that was not really a criterion (the 3 Gbps interface is still fast enough for any magnetic disk). I just happened to find them cheaper than the Samsung HD204UI that I was also considering, and the drives had good reports from people who had used them for RAID5. The 2TB Deskstar is supposed to draw a little over 4W per drive at idle. I don’t use 7200 rpm drives in my office much because of heat, noise and power usage. Both types that I had considered have three platters of 667 GB each instead of 4 platters of 500 GB in older 2 TB drives: fewer platters means less electricity and less heat. A three platter 2 TB drive should draw no more power than a 1.5 TB (3×500 GB) drive.

There are “enterprise class” drives designed specifically for RAID, but they cost two to three times more than desktop drives — so much for the “I” in RAID that is supposed to stand for “inexpensive”. These drives support a special error handling mode known as CCTL or TLER which some hardware RAID controllers and Windows require, but apparently the Linux software RAID driver copes fine with cheap desktop drives. The expensive drives also have better seek mechanisms to deal with vibration problems, but at least some of those vibration problems are worse with 7200 rpm drives than the 5400 rpm drives that I tend to buy.

Motherboard, PSU and 4 RAID drives in case

The case I picked was the CoolerMaster Centurion 5 II, which as you can see above is pretty large for a MicroATX board like the GA-H67MA-UD2H-B3, but I wanted enough space for at least 4 hard disks without crowding them in. Most cases that take only MicroATX boards and not full size ATX tend to have less space for internal hard disks or squeeze them in too tightly for good airflow. This case comes with two 12 cm fans and space to install three more 12 or 14 cm fans, not that I would need them. One of these fans blows cool air across the hard disks, which should minimize thermal problems even if you work those disks hard.

One slight complication was that the hard disks in the internal 3.5″ bays needed to be installed the opposite way most people expect: you have to take off both side covers of the case, then connect the power and SATA cables from the rear (facing the bottom of the motherboard) after sliding the drives in from the front side (facing the top of the motherboard). Once you do that you don’t even need L-shaped SATA cables; I could use the 4 SATA 6 Gbps cables that came with the GigaByte board. Most people expect to be able to install the hard disks by just opening the front cover of the case and then run into trouble. It’s not a big deal once you figure it out, but quite irritating until then.

4 RAID drives in case

I installed Ubuntu 11.04, which has just been released, from the AMD64 alternate CD in a USB DVD drive. I configured the space for the /boot file system as a RAID1 over 4 drives and the / file system as a RAID6 over 4 drives using most of the space. Initially I had problems installing Grub as a boot loader after the manual partitioning, but the reason was that I needed to create a “bios_grub” partition on every drive before creating my boot and data RAID partitions.

RAID6 is like RAID5 but with two sets of parity data. Where the smallest RAID5 consists of three drives, a minimal RAID6 has four, with both providing two drives’ worth of net storage space. A degraded RAID6 (i.e. with one dead drive) effectively becomes a RAID5. That avoids nasty surprises that can happen with RAID5 when one of the other disks goes bad during a rebuild of a failed drive. If you order a spare when you purchase a RAID5 set and plan to keep the drive in a drawer until one of the others fails, you might as well go for a RAID6 to start with and gain the extra safety margin from day 1.
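The net-capacity arithmetic can be sketched like this (an illustration of the rule, not an mdadm computation):

```python
def raid_net_capacity(drives, drive_tb, level):
    """Net capacity for simple parity RAID levels: one drive's worth of
    parity for RAID5, two for RAID6."""
    parity = {"raid5": 1, "raid6": 2}[level]
    if drives < parity + 2:
        raise ValueError("not enough drives for " + level)
    return (drives - parity) * drive_tb

minimal_raid5 = raid_net_capacity(3, 2, "raid5")  # 4 TB net from three 2 TB drives
minimal_raid6 = raid_net_capacity(4, 2, "raid6")  # 4 TB net from four 2 TB drives
```

Both minimal arrays yield two drives’ worth of storage, which is exactly the trade-off described above: RAID6 spends one more drive for the extra safety margin.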

I had problems getting the on-board network port to work, so I first used a USB 2.0 network adapter and later installed an Intel Gigabit CT Desktop Adapter (EXPI9301CT). With two network interfaces you can use any Linux machine as a broadband router; there are various pre-configured packages for that.

While the RAID6 array was still syncing (writing checksums computed from data on two drives to two other drives), keeping all disks and partly the CPU busy, the machine was drawing about 58W at the wall socket, as measured by my WattChecker Plus. Later, when the RAID had finished rebuilding and the server was just handling my spam feed traffic, power usage dropped to 52W at the wall socket. That’s about 450 kWh per year.
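The annual figure follows directly from the idle draw; a one-line sanity check:

```python
idle_watts = 52.0                 # measured at the wall after the rebuild
hours_per_year = 24 * 365
kwh_per_year = idle_watts * hours_per_year / 1000  # ~455 kWh, "about 450"
```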

The total cost for the server with Core i3 2100T, 8 GB DDR3 RAM (1333), H67 MicroATX board, PCIe Ethernet card, 4 x 2 TB SATA drives, case and 380W PSU was just under 80,000 yen including tax, under US$1,000.

Nokia’s suicidal alliance with Microsoft

Much has been written about Nokia’s alliance with Microsoft announced last month. I can understand how Nokia CEO Stephen Elop, an ex-Microsoft employee who until recently was its 7th biggest shareholder, would have made this decision that benefited his former employer, but why did Nokia’s board of directors ever agree to this move?

Under attack from the iPhone and Android, Nokia had to take action, but in my opinion this move is almost the worst possible choice. It will be an unmitigated disaster for Nokia. I am not just thinking of countless development engineers who will undoubtedly be laid off now that Nokia will be buying in Windows Phone 7 (WP7) instead of developing operating system software in-house. No, it’s also a major strategic error for the company as a whole and I’ll explain why.

Nokia used to have a great brand name with consumers. Now Symbian phones have “OBSOLETE!” stamped all over them, but that’s all Nokia will have to sell for at least another year. Who is going to buy those obsolete phones, other than at rock-bottom prices? It will be ugly for Nokia’s cash flow. How on earth does Nokia believe it can still sell 150 million Symbian phones between now and when its WP7 models replace them? They’re dead in the water.

I can’t see Intel being pleased about what all this means for their cooperation on MeeGo, if WP7 is the future.

In 2008 Nokia acquired Norwegian company Trolltech, developers of the well-regarded Qt cross-platform application and user interface framework. Licensing to commercial users of Qt will now be transferred to Digia PLC of Finland. Qt will not be ported to WP7. Only a few months ago Stephen Elop still talked about Qt being the common interface for Symbian and MeeGo. Qt was supposed to be the element that ties together Symbian and MeeGo in the mobile world. With Symbian dead, MeeGo on life support and a categorical “NO!” on Qt on WP7, Qt has no future left on mobile. But what else should one expect from a proprietary software company like Microsoft? They have never been keen on applications being ported from Windows to other operating systems; they want developers using Microsoft tools only.

Nokia’s name is dirt within its developer community, because after the announcement the Symbian ecosystem is dead, whatever Nokia would have us believe. It is also hard to believe that Elop had no WP7 plans a few months ago, when Nokia was still feeding developers its Symbian / MeeGo / Qt strategy. Many developers must feel deceived. It will be hard for Nokia to regain their trust.

Several hardware makers that had worked closely with Microsoft on the previous generation of its phone platform (Windows Mobile) are now firmly in the Android camp. For example, HTC built the first Microsoft Windows based smartphone in 2002, but released an Android phone in 2008 and shifted the core of its smartphone business to that platform the following year (my Google Ion phone is made by HTC). Though it also offers some WP7 models, the bulk of its smartphone business is now Android.

With Windows Mobile, Microsoft could not translate its dominance on the desktop into traction in the mobile market, so it dumped Windows Mobile, with no compatible upgrade path to WP7. Developers had to rewrite apps from scratch. These early Windows Mobile supporters learned the hard way a lesson that Nokia has yet to learn: Microsoft always does what’s good for Microsoft, not for its customers or business partners.

Nokia is betting the company on an unproven challenger that is entering the market behind three bigger established competitors (Google, Apple, RIM). Late last year Microsoft boasted ‘sales’ of 1.5 million WP7 phones over a period of six weeks. That sounds significant, but what it actually meant were phones stuffed into the sales channel, mostly still sitting on shelves at mobile phone stores, not activated phones ringing in the pockets of retail customers. At the same time Google was activating that many Android phones every five days (every 5 1/2 days in the case of the iPhone).

No matter how much market share Nokia will lose over the next few years, whatever market share is left for Nokia with WP7 will still be a gain for Microsoft. And as long as Microsoft still has a steady cash flow from Windows 7 licenses and Microsoft Office it won’t be wiped out by a lukewarm reception for WP7 in the market, which is more than can be said for Nokia.

So why did Nokia make this risky decision? They must have come to the brutal conclusion that the company could not survive long term while still developing their own mobile OSes. Nokia only saw a choice between either switching to Android or to WP7 (or going under).

With Android they would largely have had to compete on the merits of their hardware, as every other Android OEM offers essentially the same software / marketplace “ecosystem”. Nokia didn’t want to compete on price with Asian manufacturers (which, as an aside, is exactly what they’ll have to do with their dead-end Symbian phones for the next year or more, since there will be little new software developed for them now). So if Nokia couldn’t be the top dog amongst Android makers, they could turn the other way and at least take whatever sweeteners they could get from Microsoft, while cutting back their software R&D costs and cutting jobs to weather the storm.

The biggest problem with that strategy, in my opinion, is that a few years down the road they’ll probably realize that WP7 was a dead end too. Then they’ll still have to make that switch to Android, but having already lost a few years, their good name and a lot of good staff, it will be even harder.