strcpy data corruption on Core i7 with Linux 64bit

If you’re a C programmer, does this code look OK to you?

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char* argv[])
{
  char szBuffer[80];
  strcpy(szBuffer, "abcdefghijklmnopqrstuvwxyz");
  printf("Before: %s\n", szBuffer);
  strcpy(szBuffer, szBuffer+2);
  printf(" After: **%s\n", szBuffer);

  return 0;
}

Here is the output on my server, a Core i7 running Debian 6:

Before: abcdefghijklmnopqrstuvwxyz
After: **cdefghijklmnopqrstuvwzyz

What the program does is drop two characters from a text string in a buffer, moving the rest of it left by two places. You would expect the moved characters to stay in sequence, but if you compare the last three characters of the output you can see that isn’t the case: the ‘x’ has been obliterated by a duplicate ‘z’. The code is broken.

It’s a bug, and not a straightforward one, as I’ll explain.

I first came across it a couple of months ago, as I was moving some code of mine from an Athlon 64 Linux server to a new Intel Core i7 server. Subsequently I observed strange corruption in data it produced. I tracked it down to strcpy() calls that looked perfectly innocent to me, but when I recoded them as in-line loops doing the same job the bug went away.

Yesterday I came across the same problem on a CentOS 6 server (also a Core i7, x86_64) and figured out what the problem really was.

Most C programmers are aware that overlapping block moves using strcpy or memcpy can cause problems, but assume they’re OK as long as the destination lies outside (e.g. below) the source block. If you read the small print in the strcpy documentation, it warns that the results of overlapping moves are unpredictable, but most of us don’t take that at face value and think we’ll get away with it as long as we observe the above caveat.

That is no longer the case with the current version of the GNU C library on 64-bit Linux and the latest CPUs. The current strcpy implementation uses super-fast SSE block operations that only work reliably if the source and destination don’t overlap at all. Depending on alignment and block length they may still work in some cases, but you can’t rely on that any more. The same caveat theoretically applies to memcpy (which carries the same warnings and is technically very similar), though I haven’t observed the problem with it yet.

If you do need to remove characters from the middle of a NUL terminated char array, instead of strcpy use your own function based on the memmove and strlen library functions, for example something like this:

void myStrCpy(char* d, const char* s)
{
  /* memmove() is specified to handle overlapping blocks;
     the +1 also copies the terminating NUL */
  memmove(d, s, strlen(s)+1);
}
...
  char szBuffer[80];
...
  // remove n characters starting i characters into the buffer:
  myStrCpy(szBuffer+i, szBuffer+i+n);
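To make the whole fix concrete, here is a minimal self-contained sketch (the removeChars wrapper name is my own, and bounds checking is omitted for brevity):

```c
#include <string.h>

/* Overlap-safe replacement for strcpy(): memmove() is guaranteed to
 * handle overlapping source and destination blocks; the +1 also
 * copies the terminating NUL. */
void myStrCpy(char* d, const char* s)
{
  memmove(d, s, strlen(s) + 1);
}

/* Remove n characters starting i characters into a NUL-terminated
 * buffer (hypothetical helper, no bounds checking). */
void removeChars(char* buf, size_t i, size_t n)
{
  myStrCpy(buf + i, buf + i + n);
}
```

With this in place, removeChars(szBuffer, 0, 2) yields the expected “cdefghijklmnopqrstuvwxyz” regardless of how the library strcpy happens to be implemented.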

I don’t know how much existing code the “optimized” strcpy library function broke in the name of performance, but I imagine there are many programmers out there who got caught by it like I was.

Gateway M-6750 with Intel Ultimate-N 6300 under Ubuntu and Vista

My Gateway M-6750 laptop uses a Marvell MC85 wireless card, for which there is no native Linux driver. Previously I got it working with Ubuntu 9.10 using an NDIS driver for Windows XP. Recently I installed Ubuntu 11.04 from scratch on this machine (i.e. wiping the Linux ext4 partition) and consequently lost wireless access again.

Instead of trying to locate, extract and install the XP NDIS driver again, this time I decided to solve the problem in hardware. Intel’s network hardware has good Linux support. I ordered an Intel Centrino Ultimate-N 6300 half-size mini PCIE networking card, which cost me about $35. Here is how I installed it.

Here is a picture of the bottom of the laptop. Remove the three screws on the cover closest to you (the one with a hard disk icon and “miniPCI” written on it) and open the cover. Use a non-magnetic screwdriver because the hard disk is under that cover too. As a matter of caution, use only non-magnetic tools near hard disks or risk losing your data.

Remove the screw that holds the MC85 card in the mini PCI slot on the right. Remove the network card. Carefully unplug the three antenna wires. Connect those wires to the corresponding locations on the Intel card. Insert the Intel card into the socket on the left. Note: I had first tried the Intel card in the socket on the right but in that case it always behaved as if the Wireless On/Off switch was in the Off position, regardless of its actual state. Even rebooting didn’t make it recognize the switch state. The left mini PCI socket did not have this problem 🙂

Because the Intel card is a half size card you will also need a half size to full size miniPCI adapter to be able to screw the card down to secure it. Instead I simply used a stiff piece of cardboard (an old business card) to hold it in place and closed the cover again. If you take your laptop on the road a lot I recommend doing it properly (don’t sue me if the cardboard trick melts your motherboard or burns down your house).

For the Windows side, download the Intel driver and utility set from the Intel website using a wired connection. Under Ubuntu the card worked the first time I rebooted into it; I just had to connect to the WLAN.

UPDATE:

I fixed it properly using a half size to full size Mini PCI-E (PCI Express) adapter converter bracket by Shenzhen Fenvi Technology Co., Ltd. in Guangdong. I had found it on Alibaba. I paid $9.50 by Paypal and a bit over a week later five sets of brackets and matching screws arrived by mail from Hong Kong (one set is only $1.90 but the minimum order was 5, so that’s what I ordered). The brackets come with about a dozen each of two kinds of screws. Four of the smaller screws worked fine for me.

VIA PC3500 board revives old eMachines PC

Last September one of my desktop machines died and I bought a new Windows 7 machine to replace it. Today I brought the old machine back to life by transplanting a motherboard from an old case that I had been using as my previous Linux server. The replacement board is a VIA MM3500 (also known as VIA PC3500), with a 1.5 GHz VIA C7 CPU, 2 GB of DDR2 RAM and on-board video. It has two IDE connectors as well as two SATA connectors, allowing me to use both my old DVD and parallel ATA hard drives as well as newer high capacity SATA drives.

After the motherboard swap I had to reactivate Windows XP because it detected a major change in hardware. Most of the hardware on the new board worked immediately: I could boot and had Internet access without any reconfiguration. I just had to increase the video resolution from the default 640×480 to get some dialogs working.

I then downloaded drivers for the mother board and video from the VIA website. I now have the proper CN896 (Chrome IGP9) video driver working too.

When I tested the board as a server with dual 1 TB drives (RAID1), it was drawing 41W at idle. Running in my eMachines T6212 case with a single PATA hard drive it draws 38W at idle.

Before removing the old motherboard I made a note of all the cable connections on both motherboards. The front-mounted USB ports and card reader have corresponding internal cables, which connected to spare on-board USB connectors. The analog sound connectors connect to the motherboard too. The only port at the front left unconnected was the IEEE-1394 (FireWire / iLink) port, which has no counterpart on the VIA board.

It feels great to have my old, fully configured machine with all its data and applications back thanks to a cheap motherboard that works flawlessly.

Ubuntu 11.04, GA-H67MA-UD2H-B3, EarthWatts EA-380D, Centurion 5 II, 5K3000

CoolerMaster Centurion 5 II

It’s been 2 months since I have written a blog post that wasn’t about the Tohoku earthquake and tsunami or the Fukushima 1 nuclear disaster, but today I am taking a break from those subjects. The reason is that I replaced my local Ubuntu server with newer hardware. The primary requirements were:

  • GNU/Linux (Ubuntu)
  • Reasonably low power usage
  • Large and very reliable storage
  • Affordability

I was considering boards ranging from the new AMD Zacate E-350 dual core to LGA-1155 (“Sandy Bridge”) boards with the Core i5 2500K. First Intel’s P67/H67 chip set problems and then the disaster in Japan prompted me to postpone the purchase.

Finally I picked the GigaByte GA-H67MA-UD2H-B3, a MicroATX board with 4 DIMM slots, in conjunction with the Core i3 2100T, a 35W TDP part with two cores and 4 threads. The boxed version of the Intel chip comes with a basic fan that didn’t sound too noisy to me. I installed two 4 GB DDR3 modules for a total of 8 GB of RAM, with two slots still available. When you install two memory modules on this board you should install them in slots of the same colour (either the blue or the white pair) to get the benefit of dual channel operation.

Gigabyte GA-H67MA-UD2H-B3

I chose an H67 board because of the lower power usage of the on-chip video, and the 2100T has the lowest TDP of any Core 2000-series chip. I don’t play games and my video needs are no greater than those of a basic office PC. Unlike P67 boards, H67 boards cannot be overclocked. If you’re a gamer and care more about ultimate performance than power usage you would probably go for a P67 or Q67 board with an i5 2500K or i7 2600K and a discrete video card.

To minimize power use at the wall socket I picked an 80 Plus power supply (PSU), the Antec EarthWatts EA-380D Green. It meets the 80 Plus Bronze standard, which means it converts AC to DC with at least 82% efficiency at 20% load, at least 85% at 50% load and at least 82% at full load. It’s the lowest capacity 80 Plus PSU I could find here. 20% load for a 380W PSU is 76W. Since the standard does not specify the efficiency achieved below 20% of rated output, and efficiency typically drops at the lower end, it doesn’t pay to pick an over-sized PSU.
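As a rough illustration of why sizing matters (a back-of-the-envelope helper of my own, using the Bronze minimum figures quoted above):

```c
/* Wall-socket draw implied by a given DC load and a conversion
 * efficiency (illustrative only, not part of the 80 Plus text). */
double wall_draw(double dc_watts, double efficiency)
{
  return dc_watts / efficiency;
}
/* Example: 20% load on a 380W PSU is 76W DC; at the Bronze floor
 * of 82% efficiency that is about 92.7W drawn at the wall. */
```

Below that 20% point the standard guarantees nothing, which is exactly where an over-sized PSU would spend most of its time in a low-power server.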

Disk storage is provided by four Hitachi Deskstar 5K3000 drives of 2 TB each (HDS5C3020ALA632). These are SATA 6 Gbps drives, though that was not really a criterion (the 3 Gbps interface is still fast enough for any magnetic disk). I just happened to find them cheaper than the Samsung HD204UI that I was also considering, and the drive had good reports from people who had used it for RAID5. The 2 TB Deskstar is supposed to draw a little over 4W per drive at idle. I don’t use 7200 rpm drives in my office much because of heat, noise and power usage. Both types that I had considered have three platters of 667 GB each instead of the 4 platters of 500 GB in older 2 TB drives: fewer platters mean less electricity and less heat. A three platter 2 TB drive should draw no more power than a 1.5 TB (3×500 GB) drive.

There are “enterprise class” drives designed specifically for RAID, but they cost two to three times more than desktop drives — so much for the “I” in RAID that is supposed to stand for “inexpensive”. These drives support a special error handling mode known as CCTL or TLER which some hardware RAID controllers and Windows require, but apparently the Linux software RAID driver copes fine with cheap desktop drives. The expensive drives also have better seek mechanisms to deal with vibration problems, but at least some of those vibration problems are worse with 7200 rpm drives than the 5400 rpm drives that I tend to buy.

Motherboard, PSU and 4 RAID drives in case

The case I picked was the CoolerMaster Centurion 5 II, which as you can see above is pretty large for a MicroATX board like the GA-H67MA-UD2H-B3, but I wanted enough space for at least 4 hard disks without crowding them in. Most cases that take only MicroATX boards and not full size ATX tend to have less space for internal hard disks or squeeze them in too tightly for good airflow. This case comes with two 12 cm fans and space to install three more 12 or 14 cm fans, not that I would need them. One of these fans blows cool air across the hard disks, which should minimize thermal problems even if you work those disks hard.

One slight complication was that the hard disks in the internal 3.5″ slots needed to be installed the opposite way to what most people expect: you have to take off both covers of the case, then connect the power and SATA cables from the rear (facing the bottom of the motherboard) after sliding the drives in from the front side (facing the top of the motherboard). Once you do that you don’t even need L-shaped SATA cables; I could use the 4 SATA 6 Gbps cables that came with the GigaByte board. Most people expect to be able to install the hard disks by just opening the front cover of the case, and then run into trouble. It’s not a big deal once you figure it out, but quite irritating until then.

4 RAID drives in case

I installed Ubuntu 11.04, which has just been released, from the AMD64 alternate CD using a USB DVD drive. I configured the space for the /boot file system as a RAID1 across all 4 drives and the / file system as a RAID6 across all 4 drives, using most of the space. Initially I had problems installing GRUB as the boot loader after the manual partitioning; the reason was that I needed to create a “bios_grub” partition on every drive before creating my boot and data RAID partitions.

RAID6 is like RAID5 but with two sets of parity data. Where the smallest RAID5 consists of three drives, a minimal RAID6 has four, with both providing two drives’ worth of net storage space. A degraded RAID6 (i.e. with one dead drive) effectively becomes a RAID5. That avoids nasty surprises that can happen with RAID5 when one of the other disks goes bad during a rebuild of a failed drive. If you order a spare when you purchase a RAID5 set and plan to keep the drive in a drawer until one of the others fails, you might as well go for a RAID6 to start with and gain the extra safety margin from day 1.
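The capacity trade-off can be sketched in a couple of lines (the helper names are mine):

```c
/* Net data capacity of RAID arrays of equal-sized drives:
 * RAID5 reserves one drive's worth of parity, RAID6 reserves two. */
unsigned raid5_net_tb(unsigned drives, unsigned tb_per_drive)
{
  return (drives - 1) * tb_per_drive;
}

unsigned raid6_net_tb(unsigned drives, unsigned tb_per_drive)
{
  return (drives - 2) * tb_per_drive;
}
/* Three 2 TB drives in RAID5 and four in RAID6 both yield 4 TB net,
 * but the RAID6 survives any two simultaneous drive failures. */
```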

I had problems getting the on-board network port to work, so I first used a USB 2.0 network adapter and later installed an Intel Gigabit CT Desktop Adapter (EXPI9301CT). With two network interfaces you can use any Linux machine as a broadband router; there are various pre-configured packages for that.

While the RAID6 array was still syncing (computing parity blocks from the data and writing them out across the drives), keeping all disks and partly the CPU busy, the machine was drawing about 58W at the wall socket, as measured by my WattChecker Plus. Later, when the RAID had finished rebuilding and the server was just handling my spam feed traffic, power usage dropped to 52W at the wall socket. That’s about 450 kWh per year.
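The annual figure follows directly from the wall-socket reading (a trivial conversion, shown for completeness):

```c
/* kWh per year for a constant draw in watts. */
double kwh_per_year(double watts)
{
  return watts * 24.0 * 365.0 / 1000.0;
}
/* 52W around the clock works out to roughly 456 kWh per year. */
```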

The total cost for the server with Core i3 2100T, 8 GB DDR3 RAM (1333), H67 MicroATX board, PCIe Ethernet card, 4 x 2 TB SATA drives, case and 380W PSU was just under 80,000 yen including tax, under US$1,000.

Nokia’s suicidal alliance with Microsoft

Much has been written about Nokia’s alliance with Microsoft announced last month. I can understand how Nokia CEO Stephen Elop, an ex-Microsoft employee who until recently was its 7th biggest individual shareholder, would have made this decision that benefited his former employer, but why did Nokia’s board of directors ever agree to this move?

Under attack from the iPhone and Android, Nokia had to take action, but in my opinion this move is almost the worst possible choice. It will be an unmitigated disaster for Nokia. I am not just thinking of countless development engineers who will undoubtedly be laid off now that Nokia will be buying in Windows Phone 7 (WP7) instead of developing operating system software in-house. No, it’s also a major strategic error for the company as a whole and I’ll explain why.

Nokia used to have a great brand name with consumers. Now Symbian phones have “OBSOLETE!” stamped all over them, but that’s all Nokia will have to sell for at least another year. Who is going to buy those obsolete phones, other than at rock-bottom prices? It will be ugly for Nokia’s cash flow. How on earth does Nokia believe it can still sell 150 million Symbian phones before their WP7 models replace them? They’re dead in the water.

I can’t see that Intel would be pleased about what that all means for their cooperation on MeeGo, if WP7 is the future.

In 2008 Nokia acquired the Norwegian company Trolltech, developers of the well-regarded Qt cross-platform application and user interface framework. Licensing to commercial users of Qt will now be transferred to Digia PLC of Finland. Qt will not be ported to WP7. Only a few months ago Stephen Elop was still talking about Qt being the common interface for Symbian and MeeGo; Qt was supposed to be the element that ties the two together in the mobile world. With Symbian dead, MeeGo on life support and a categorical “NO!” to Qt on WP7, Qt has no future left on mobile. But what else should one expect from a proprietary software company like Microsoft? They have never been keen on applications being ported from Windows to other operating systems; they want people to use Microsoft tools only.

Nokia’s name is dirt within its developer community, because whatever Nokia would have us believe, after the announcement the Symbian ecosystem is dead. It is also hard to believe that Elop had no WP7 plans a few months ago, when Nokia was still feeding developers its Symbian / MeeGo / Qt strategy. Many developers must feel deceived. It will be hard for Nokia to regain their trust.

Several hardware makers that had worked closely with Microsoft on the previous generation of its phone platform (Windows Mobile) are now firmly in the Android camp. For example, HTC built the first Microsoft Windows based smartphone in 2002, but released an Android phone in 2008 and shifted the core of its smartphone business to that platform the following year (my Google Ion phone is made by HTC). Though it also offers some WP7 models, the bulk of its smartphone business is now Android.

With Windows Mobile, Microsoft could not translate its dominance on the desktop into traction in the mobile market, so it dumped Windows Mobile, with no compatible upgrade path to WP7. Developers had to rewrite apps from scratch. These early Windows Mobile supporters learned a lesson with Microsoft that Nokia is yet to learn, the hard way: Microsoft always does what’s good for Microsoft, not for its customers or business partners.

Nokia is betting the company on an unproven challenger that is entering the market behind three bigger established competitors (Google, Apple, RIM). Late last year Microsoft boasted ‘sales’ of 1.5 million WP7 phones over a period of six weeks. That sounds significant, but what it actually meant were phones stuffed into the sales channel, mostly still sitting on shelves at mobile phone stores, not activated phones ringing in the pockets of retail customers. At the same time Google was activating that many Android phones every five days (every 5 1/2 days in the case of the iPhone).

No matter how much market share Nokia will lose over the next few years, whatever market share is left for Nokia with WP7 will still be a gain for Microsoft. And as long as Microsoft still has a steady cash flow from Windows 7 licenses and Microsoft Office it won’t be wiped out by a lukewarm reception for WP7 in the market, which is more than can be said for Nokia.

So why did Nokia make this risky decision? They must have come to the brutal conclusion that the company could not survive long term while still developing their own mobile OSes. Nokia only saw a choice between either switching to Android or to WP7 (or going under).

With Android they would largely have had to compete on the merits of their hardware, as every other Android OEM offers essentially the same software / marketplace “ecosystem”. Nokia didn’t want to compete on price with Asian manufacturers (which, as an aside, is exactly what they’ll have to do with their dead-end Symbian phones for the next year or more, since there will be little new software developed for them now). So if Nokia couldn’t be the top dog amongst Android makers, they could turn the other way and at least take whatever sweeteners they could get from Microsoft, while cutting back their software R&D costs and cutting jobs to weather the storm.

The biggest problem with that strategy in my opinion is that a few years down the road they’ll probably realize that WP7 was a dead end too. Then they’ll still have to make that switch to Android, but having already lost a few years, their good name and a lot of good staff it will be even harder.

Outlook Express missing margins while printing

I recently had problems printing out emails in Outlook Express, the mail client I use on Microsoft Windows. Usually Outlook Express leaves about 2 cm blank between the edge of the paper and the start of the text. Instead it started printing so far to the left that the first character of each line was cut off.

Outlook Express has no “Page Setup” option in its File menu to configure margins. So how come the margins had changed and how could I fix it? As it turned out, Outlook Express considers itself a part of Internet Explorer, components of which it shares for rendering text and other purposes.

The reason I lost the margin was that when I printed my nengajō (Japanese New Year’s Cards) for 2011 two months ago I had tweaked the IE printing margins to the minimum. Since I usually use Mozilla Firefox or Google Chrome as a browser, I had not noticed I had left IE with those settings — until I came to print emails in Outlook Express, that is.

The fix was easy: launch Internet Explorer, go to File => Page Setup => Margins and set Left / Right / Top / Bottom back to 0.75 if using inches, or the equivalent in centimetres if using metric. Voilà, OE will print with the standard margins again.

The return of the most robust router (WHR-HP-G54 / DD-WRT)

There, I’ve done it: I replaced my fancy new broadband router, a Buffalo WZR-HP-G300NH that supports 802.11n (up to 300 Mbps), with an older model that I had first purchased two years ago, the WHR-HP-G54 (802.11b/g, up to 54 Mbps). Besides supporting the newer, faster wireless standard, the newer router had a faster CPU, a USB port and much more RAM and ROM, which should make it much more expandable. The trouble was, it was not as robust as the most robust router I have ever used, the WHR-HP-G54. Both routers support DD-WRT and OpenWRT, the GNU/Linux-based open source router firmware projects.

First I had lots of problems with the WZR-HP-G300NH under DD-WRT, which apparently wasn’t ready for prime time on this router yet. The signal was too weak and I couldn’t connect from some parts of the building. Then I switched to OpenWRT and things looked better, but I kept losing wireless connectivity on all mobile computers and smartphones in the building at random intervals. Only a router reset would allow them to reconnect; there was no other cure. Perhaps that would have been tolerable when it happened once a week, but it seemed to get worse. Finally, after having to reboot the router three times in one day, I’d had enough. I found one supplier that still had stocks of the old WHR-HP-G54 and promptly ordered one.

The new old router arrived two days later. I only briefly accessed it from a PC without a WAN connection as a sanity check, before flashing it with dd-wrt.v24_mini_generic.bin using TFTP and then dd-wrt.v24-10070_crushedhat_4MB.bin using the DD-WRT web interface. I did perform a 30-30-30 reset after the mini flash. After the second flash I restored an NVRAM backup saved in June from my previous router of the same type, running the same firmware. Then I cloned the MAC address of the WAN port of my WZR-HP-G300NH so the new router could keep using the same broadband IP its predecessor had acquired via DHCP. After moving the WAN and LAN cables from the old router to the new one, everything just worked, including my IPv6 setup via Hurricane Electric. I just had to connect the wireless clients to the new SSID. Since then I have not had to reset the router once.

When new versions of DD-WRT and/or OpenWRT come out for the WZR-HP-G300NH I may give it a try again, but more likely I’ll just keep it as spare. I expect my second WHR-HP-G54 to work every bit as well as my first one. I don’t know how much the software was to blame and how much the hardware for the disappointing results with the newer design, but suspect that 802.11n may be too complex for its own good. There has to be a reason why it remained stuck in “Draft N” stage for so long…

I will pick a reliable router like the WHR-HP-G54 running DD-WRT over one with a fancier specification any day, because reliability is what it takes to get the job done. If you can’t find the WHR-HP-G54, another good basic choice is the WRT54GL, which also supports DD-WRT, but unfortunately it is not sold here in Japan and Amazon.com won’t ship it here from the U.S.

Skype on Android 2.1 but not in the US

A year and a half after Skype launched a version for the iPhone (March 2009) it finally released Skype for Android for customers outside the US. In the US Skype has already been available for Verizon customers for a couple of months, but if you’re with a different US network you’ll still be out of luck.

Even outside the US, Android versions before 2.1 are not supported. My Google Ion which comes with Android 1.6 got this message:

Sorry, Skype is unavailable for your mobile. We add new handsets all the time so check back soon.

Last month, about 30% of Android users were still using 1.5 or 1.6.

This makes me even keener to find out when an Android 2.2 upgrade will become available for the Google Ion, as announced in May 2010.

Epson PM-A950 under Windows 7 64bit

Earlier this month, an old eMachines T6212 bought in April 2005, a humble single core 1.6 GHz Athlon64 that had served me faithfully for more than 5 years, finally died. So two weeks ago I bought an Acer Aspire ASM3910-N54E, a Core i5-650 machine with 4 GB of RAM (max. 8 GB) and a 640 GB hard disk. It came with Windows 7 Home 64bit.

I replaced the C: drive with a 1 TB drive and added another 1.5 TB drive that I previously used in a USB-enclosure. I am using the on-board video with dual 1280×1024 monitors (Dell 1905FP), hooked up via an analog VGA cable and a digital HDMI-to-DVI cable.

The best thing I can say about Windows 7 is that it’s not as bad as Vista. I wish I could have stuck with Windows XP, but at least Windows 7 doesn’t get in the way as much as Vista did. It feels a bit more like Mac OS X, if that is what you like. It’s going to get more and more difficult to get drivers for new hardware that still support XP, but on the other hand older hardware may have problems working with Windows 7, for example my old Logitech QuickCam Zoom is not supported by Windows 7.

Epson PM-A950 printer driver

Today I tried to print from the new machine for the first time and found I needed a new printer driver for my almost four year old Epson PM-A950 USB printer/scanner. Though Microsoft’s documentation states that the printer is supported by Windows 7 out of the box, that support uses only a generic Epson printer definition which probably does not cover all the functionality. So I searched the Epson Japan website and found these two drivers (the 64bit version worked fine for my version of Windows 7):

  • Windows 7 32bit / Windows Vista 32bit / Windows XP / Windows 2000:
    http://www.epson.jp/dl_soft/file/7461/a950f652.EXE
  • Windows 7 64bit / Windows Vista 64bit / Windows XP x64 Edition:
    http://www.epson.jp/dl_soft/file/7462/a950h652.EXE

Energy efficiency

So far I’m very happy with the new machine. It draws about 40W when idle, considerably less than its less powerful predecessor (69W). The latest Core i3 and Core i5 machines are very energy efficient; my i5 actually did better than a VIA MM3500 (1.5 GHz single core VIA C7). The only x86-compatible machines I have that beat the i5 on power usage at idle are either notebooks or desktops built from notebook chipsets (i.e. the Mac Mini).

Excessive JPEG compression on Android phones

A few weeks ago I got my first smart-phone, an HTC Magic (aka Google Ion or myTouch 3G) which uses Android 1.6.

Originally I had wanted to get the HTC Desire with Android 2.1 from Softbank, but they had no more stocks of the old model and weren’t going to start shipping the new model until October. I couldn’t wait that long. That’s how I ended up getting an Android phone from the US.

I first transplanted the USIM from my almost three year old Softbank Samsung 707SCII into the Android phone, which wasn’t locked to any provider. I could then make calls here in Japan.

Next I added Softbank’s “smart phone pakehodai” (smart phone unlimited data) plan to my existing contract, after telling the company that I was going to use my existing USIM in an imported Android phone. They didn’t raise any objections to that. The plan is about 5700 yen per month (about US$67), plus 315 yen to enable web access and mail (US$3.60), which I had previously disabled as I was only using SMS besides voice calls. I configured APNs for accessing the Softbank network using this link, which then gave me full web access from my new phone even when not on my wireless LAN at home.

So far it has been a fun experience and I’m still exploring new features and applications.

The application I enjoy most so far is Google Maps. Having moved from the semirural suburbs of Yokohama to a densely populated part of Tokyo recently, I’m now exploring local back streets on foot or on the bicycle as well as riding trains, of which there are plenty. Google Maps will easily find me a train connection to anywhere in this city of 13 million people, including directions for walking to and from stations and down to the minute connection schedules (Japanese trains are famously punctual).

I was disappointed however by the picture quality of the 3 MP camera (1536 by 2048 pixels) on the phone, not that my expectations were too high to start with. But I was shocked to see that when I copied these 3 MP image files off the phone using a USB cable, they were only 330 to 700 KB (500 KB on average) in size, even when taking pictures at the highest quality settings. That is 2 to 3 times smaller than the files from typical 3 MP cameras.

My old Sony P8 (also a 3 MP camera) averaged around 1.3 MB per image. One megabyte or more per image is fairly typical for high quality settings at 3 MP. That means the Android camera must be using very aggressive JPEG compression settings, which reduce detail and produce artifacts, to squeeze pictures into 40% of the space used on other cameras. And you can really tell just by looking at the pictures: they look somewhat blurred and fuzzy, not as sharp and crisp as you’d expect even from a modest 3 MP camera.
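One way to quantify “aggressive compression” is stored bytes per pixel (my own rough metric, applied to the file sizes above):

```c
/* Average stored bytes per pixel: a rough proxy for how hard the
 * JPEG encoder is compressing the image. */
double bytes_per_pixel(double file_kb, long pixels)
{
  return file_kb * 1024.0 / (double)pixels;
}
/* A 3 MP frame is 1536 x 2048 = 3,145,728 pixels. A 500 KB file is
 * about 0.16 bytes per pixel, versus roughly 0.43 for a typical
 * 1.3 MB (1331 KB) file from the Sony P8. */
```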

What’s worse, I could not find any setting that would let me change this. A search on Google confirmed that others using different Android phones have the same problem, but currently no solution.

I hope Google will address this problem in the Android 2.2 upgrade, because with these software settings the capabilities of the hardware are wasted, even more so on 5 MP or 8 MP camera models. It makes no sense to aggressively compress pictures when the user has selected optimum quality, especially on a phone that can be expanded with a microSD memory card of up to 16 GB.