wget: “Issued certificate has expired” after September 30, 2021

Two websites that I download data from using automated processes stopped giving me new data from October 1. When I investigated the problem, I could see an error message from the wget program in Linux:

Connecting to SOME.HOSTNAME (SOME.HOSTNAME)|1.2.3.4|:443... connected.
ERROR: cannot verify SOME.HOSTNAME's certificate, issued by '/C=US/O=Let's Encrypt/CN=R3':
Issued certificate has expired.
To connect to SOME.HOSTNAME insecurely, use `--no-check-certificate'.

The quick fix, obviously, was to add --no-check-certificate to the command line, which allows the download to go ahead, but what’s the root cause? My assumption was that the site owner had let an SSL certificate expire, but after it happened with a second site on the same date, I got suspicious. It turns out that Let’s Encrypt, which many websites use for free encryption certificates, had a root certificate (DST Root CA X3) that expired on September 30, 2021. Its certificates now chain up to a newer root certificate (ISRG Root X1), which a lot of older software doesn’t trust yet because it is missing from their root certificate stores. Those systems need an update of the root certificate store.
In my case, running

sudo yum update

updated the ca-certificates package, and that allowed wget to trust the new certificate.
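If you don’t want to apply every pending update just to fix this, updating only the certificate store should be enough. A minimal sketch for yum-based systems (package names can differ on other distributions):

sudo yum update -y ca-certificates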

Outlook Express Error 0x800CCC0B and the End of TLS 1.0 (Deprecated SSL Protocol)

Microsoft Outlook Express (OE) is an obsolete mail client that was included in Microsoft Windows XP, Windows Server 2003 and older Microsoft operating systems. It is no longer available in Windows Vista and later, though Windows Live Mail is relatively close in user interface and appearance.

Despite being obsolete and only working on operating systems no longer supported or updated by Microsoft, it still has some users who prefer its simple but powerful user interface. Some of those users will have had a frustrating experience recently, when various mail servers stopped working for outbound mail in OE. Specifically, these are mail servers that use SSL on submission port 465 or 587 for SMTP.

Secure Sockets Layer (SSL) is a mechanism for encrypting data between a client and a server. You may know it from website URIs starting with “https:” and web sessions displaying a padlock symbol next to the URI. There are various protocol versions that can implement this encryption layer. One of these, TLS 1.0, which dates back to 1999, has been deprecated (made officially obsolete) as of the end of June 2018. Software now has to use more recent protocols, such as TLS 1.1, TLS 1.2 or the recently defined TLS 1.3.

Unfortunately, TLS 1.0 is all that OE will speak; it does not understand TLS 1.1 or later. Once a server stops accepting TLS 1.0, OE can therefore no longer pick up mail from it via POP over SSL on port 995 or IMAP on port 993, or send mail to its SMTP service on port 465 (or 587) with SSL enabled.

Workaround
The only workaround I am aware of (other than switching to a more modern mail client) is to use Stunnel, a tool for Windows or Linux that acts as a proxy. You can configure it to establish an SSL connection to a given host and port when a connection to a given local port is made. Thus you could configure OE to connect to port 9465 on the machine running Stunnel, which might then connect via SSL to smtp.example.com:465 using a more modern TLS version supported by Stunnel (but not directly by OE).

Example
Let’s say Outlook Express was configured to submit outbound mail to smtp.outboundmailserver.com, port 587 via SSL/TLS. This is our SMTP server. Once this server refuses to allow TLS 1.0 connections, Outlook Express will no longer work. Let’s say we also have a simple Linux server, mylinuxserver.com. This could even be something like a Raspberry Pi single board computer booting off flash memory. It can run on a local IP address in our LAN if the desktop running OE does not need to reach it from outside your building. On this server we install the stunnel package:

sudo yum install stunnel

Please read the documentation on how to enable the service and have it auto-start when the Linux server reboots.
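If your distribution ships a systemd service unit for stunnel, enabling it can be as simple as:

sudo systemctl enable stunnel
sudo systemctl start stunnel

If it doesn’t (the EPEL package for CentOS historically did not include one), a minimal unit file along these lines can serve as a starting point; this is only a sketch, assuming the usual binary path and stunnel’s default behaviour of forking into the background:

# /etc/systemd/system/stunnel.service (minimal sketch)
[Unit]
Description=TLS proxy for legacy mail clients
After=network.target

[Service]
Type=forking
ExecStart=/usr/bin/stunnel /etc/stunnel/stunnel.conf

[Install]
WantedBy=multi-user.target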

Next we configure stunnel to act as a TLS client on our behalf: it accepts the connection from Outlook Express on a local port and forwards it to the real POP3, SMTP or IMAP server using a current TLS version. We will create lines like these in /etc/stunnel/stunnel.conf:

client = yes

;cert = /etc/pki/tls/certs/stunnel.pem
;sslVersion = TLSv1
;chroot = /var/run/stunnel
;setuid = nobody
;setgid = nobody
;pid = /stunnel.pid
;socket = l:TCP_NODELAY=1
;socket = r:TCP_NODELAY=1

[smtp-outboundmailserver]
accept = 1587
connect = smtp.outboundmailserver.com:587

Create other entries for the services that you need TLS support for (see the example below) and restart the stunnel service. Then reconfigure Outlook Express to access the Linux host and the port number listed with “accept =” in place of the original server that refused your Outlook Express TLS 1.0 connection. You should be good to go!
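As an example of such additional entries, a POP3 and an IMAP server might be covered like this (the host names and local ports here are placeholders, not taken from a real setup):

[pop3-mailserver]
accept = 1995
connect = pop.example.com:995

[imap-mailserver]
accept = 1993
connect = imap.example.com:993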

Long term you will still need to migrate to another mail client such as Thunderbird, Windows Mail or OE Classic, but this workaround will buy you some time for that.

Updated jwhois.conf File for CentOS for New gTLDs

The whois command on CentOS 6.x and 7.x doesn’t handle queries for many domains in new Top Level Domains (TLDs) that were added by ICANN in the last few years.

Domains from many of these new TLDs are selling for as little as $0.99 a pop, making them attractive to snowshoe spammers who create them in large numbers. As a spam researcher, I see lots of new spam domains from TLDs such as .xyz, .online, .top, .club, .services, .win, .site, .bid, .life and .trade.

WHOIS is an important tool for me to track the domain registrants. CentOS uses jwhois as its WHOIS client, which relies on a configuration file to tell it what servers to query for detailed information. The configuration file that comes with recent CentOS versions is woefully out of date.

I have gone through the currently existing TLDs and counted 466 of them that are not supported by jwhois but appear to have a valid WHOIS server. I have been able to verify for about half of these TLDs that the WHOIS server works and have added them to my configuration file, which you can download here.
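For reference, jwhois.conf maps TLD patterns to WHOIS server host names inside its whois-servers block, so adding support for a new TLD comes down to one line per TLD, roughly like this (using .xyz and its whois.nic.xyz server as an illustration; verify the server name for any TLD you add):

"\\.xyz$" = "whois.nic.xyz";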

Many of the rest of the new TLDs are hosted by Neustar, which rate-limits lookups. Because of that I didn’t fully verify that all of those hosts work, but I did verify that CNAMEs exist for the WHOIS hosts, redirecting to Neustar WHOIS servers, and tested a small sample of those TLDs.

Adding Free SSL Certificates for HTTPS To Your Websites

I recently received a warning email from Google:

“Starting October 2017, Chrome (version 62) will show a ‘NOT SECURE’ warning when users enter text in a form on an HTTP page, and for all HTTP pages in Incognito mode.”

The recommended solution was to migrate the affected website(s) to HTTPS. This requires an SSL certificate. There are many companies selling those for hundreds of dollars. I didn’t really want to spend that money.

It turns out there is a free alternative: The Let’s Encrypt project (https://letsencrypt.org/) provides free SSL certificates with just enough functionality to run SSL with current browsers. It also provides automated tools that greatly assist you in obtaining and installing those certificates.

I had a default SSL host configured on my Apache 2.4 installation (inherited from a different server running Ubuntu) that I had to manually remove.

Then, when all virtual hosts only had port 80 (HTTP) enabled, I could run the certbot tool as root:

# certbot --apache

It enumerates all host names supported by your Apache installation. I ran it repeatedly, for each domain and the corresponding www. host name (e.g. joewein.net, www.joewein.net) in my installation, and verified the results one at a time. It creates a new virtual host file in /etc/httpd/hosts-enabled for those hosts for port 443 (HTTPS). I appended the content of that file to my existing port 80 (HTTP) virtual host file in /etc/httpd/hosts-available for that host name and deleted the new file created by certbot. That way I can track all configuration details for each website for both HTTP and HTTPS in a single file, but this is purely a personal choice.
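The generated HTTPS virtual host ends up looking roughly like the following sketch; the exact directives depend on your certbot and Apache versions, and example.com stands in for your own domain:

<VirtualHost *:443>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/example.com

    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
</VirtualHost>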

All it takes is an Apache restart to enable the new configuration.

You can test if SSL is working as expected by accessing the website with a browser using https:// instead of http:// at the start of the URI.
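Alternatively, curl can be used from the command line; with the -v option it prints the certificate details as part of the TLS handshake (substitute your own host name):

curl -vI https://www.example.com/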

If you have iptables rules for port 80, you may want to replicate those for port 443 or the certificate generation / renewal may fail. Also, you want to make sure that SSLv3 is turned off on your Apache installation, to protect against the POODLE vulnerability. This required the following setting in ssl.conf:

/etc/httpd/conf.d/ssl.conf:SSLProtocol all -SSLv2 -SSLv3

The free certificates expire after 90 days, but it’s recommended to add a daily cron job that requests renewal, so that a new certificate is downloaded after about 60 days, long before the old one expires. Once that is in place, maintenance of the SSL certificates is totally automatic.
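A minimal cron entry for this could look like the following; the certbot path and the schedule are assumptions to adapt to your system, and certbot renew only replaces certificates that are close to expiry:

# /etc/cron.d/certbot-renew
0 3 * * * root /usr/bin/certbot renew --quiet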

UPDATE (2017-11-01): If you’re using WordPress on your website, you should change the WordPress base URI to HTTPS too. To do that, log into the WordPress Dashboard. In there, select Settings > General. Change the “http://” in the WordPress Address (URL) and Site Address (URL) fields to “https://” and click the Save Changes button. This ensures that any messages from WordPress to you will include secure URIs.

JWHOIS uses 100% of CPU on CentOS

Occasionally we hit a bug where the ‘whois’ command hangs on one of our CentOS servers, spinning at 100% CPU. This has been happening on several CentOS versions, including 6.8. Specifically, this is a problem in jwhois, the whois client included in CentOS.

Apparently, CentOS (and RHEL, on whose source code it’s based) is missing a number of fixes that have been added to other Linux distributions, including Fedora, over the last couple of years. So the problem is actually known and a fix has been available for years; it’s just not included in the product.

Comparing the change logs for jwhois between CentOS and Fedora, everything matches up to and including build 4.0-18 in September 2009, but then the two diverge.

On Jan 26, 2010, Fedora received a fix (“Use select to wait for input (patch by Joshua Roys <joshua.roys AT gtri.gatech.edu>)”) in a new 4.0-19 build that resolved bug #469412 for precisely this issue. There are many more changes in Fedora’s jwhois after that, unlike its RHEL and CentOS equivalent, which in all the years since then has received only a single update. That update is also called 4.0-19, but it was made on Jun 23, 2011 and includes only two fixes for unrelated issues that were addressed in Fedora’s jwhois updates 4.0-24 (Dec 20, 2010) and 4.0-26 (Mar 15, 2011), but not the earlier select fix or fixes for any of the other issues. CentOS is missing “jwhois-4.0-select.patch”, and that’s why whois hangs.

Upgrading to 14.04.1 LTS or If It Ain’t Broke, Don’t Fix it

I should have left my Ubuntu 12.04 LTS well alone. Yes, it is over 2 years old, but it worked rock solid and I’ve been good about installing updates on it.

I don’t know what devil rode me last Friday, but when the system informed me that an upgrade to 14.04.1 LTS was available, I went ahead and gave it a try. I should have known better.

When the upgrade finished many hours later, POP access to the dovecot server was no longer working and rsync using modules was broken (the rsync daemon was not running). I had accepted all the defaults to keep existing configuration files during the upgrade. It turned out that dovecot’s configuration needed an explicit inbox namespace:

namespace inbox {
    ...
    inbox = yes
}
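Dovecot then needs a restart to pick up the change; on Ubuntu 14.04 this should do it:

sudo service dovecot restart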

The rsync daemon needed to be manually enabled again via

sudo vi /etc/default/rsync

RSYNC_ENABLE=true
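With that setting in place, the daemon can be started right away rather than waiting for the next reboot; on Ubuntu 14.04:

sudo service rsync start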

Hopefully I won’t stumble across more problems that will need fixing, but the experience was a reminder not to needlessly mess with a working system.

Adding sudo on Debian Linux

For a long time I had been using the sudo command on Ubuntu and other Linux versions, but my main server did not have it installed. I always had to use ‘su’ with the root password to be able to do administrative jobs. It turns out it was really easy to fix. Simply follow these steps as root (using your actual user name in place of jsmith):

apt-get install sudo
adduser jsmith sudo

This installs the sudo package, creates a sudo user group and the /etc/sudoers configuration file. It then adds your user to the user group sudo, which per the default /etc/sudoers file is permitted to run sudo.

Note that these changes do not take effect for any ssh sessions already open. If you have a running session logged in as the user you just added to the sudoers list and you attempt to use sudo from there, it will ask for your password and then fail with this error message:

jsmith is not in the sudoers file. This incident will be reported.

The fix is simple: log out and log back in again. On the new login, the new configuration will be picked up and you will be able to use sudo as intended.

If you would like to run multiple commands under sudo, like you could from su, it’s very easy. Simply use sudo to launch a copy of bash and exit after you’re done:

sudo bash
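An alternative, if you also want root’s own login environment, is sudo’s -i option, which behaves much like “su -”:

sudo -i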

Acer One D260 system restore

The hard disk in my wife’s Acer One D260 netbook got damaged. A new hard disk is about a quarter the price of a new netbook, so I wanted to install a new drive. As with most PCs these days, there were no Windows install DVDs included.

The netbook came with Windows 7 Starter, which we needed to somehow install on the new hard disk. Fortunately, the damaged hard disk was still limping along enough to use the Acer eRecovery system to create two Recovery DVDs. These should allow restoring the initial system state to a hard disk in the machine, wiping all the data on the drive.

To replace the hard disk, I had to undo seven clips around the edge of the keyboard, lift off the keyboard and disconnect the keyboard ribbon cable to the motherboard connector. Then I needed to undo 4 screws underneath and push through, to pop out the cover on the bottom of the machine. This opened access to the single memory slot and drive cage.

The 1 GB memory module on the motherboard can be replaced with a 2 GB PC3-8500 1066 MHz DDR3 module available for about $20. This is a worthwhile investment and I already have the module on order.

I replaced the damaged 250 GB WD Scorpio Blue drive with a spare 500 GB drive (available new for about $60-$80). Then I closed the cover and reinstalled the screws and then the keyboard.

With the new drive it was possible to boot off the first Recovery DVD using a USB DVD drive. The eRecovery software copied data from both DVDs to the hard disk and then rebooted. However, that reboot failed because the new drive did not yet have a Windows Master Boot Record (MBR) on it. You can install an MBR from within Windows, but not from the bootable eRecovery DVD. So I had a chicken and egg problem.

I overcame this hurdle by booting off a Ubuntu Live DVD (32 bit), installing the ‘lilo’ package and telling it to install the Linux equivalent of Microsoft’s MBR code:

sudo apt-get install lilo
sudo lilo -M /dev/sda mbr

At the next attempt to boot off the hard disk, Windows started installing its components and drivers and launched into its initial configuration, just like the first time we had unboxed the machine more than two years ago. So we are back to a working Windows 7 machine!

Thank you, Linux — you saved my day again! 🙂

Western Digital 4 KB sector drive alignment for Windows XP and 2003 server

If your existing Windows XP or Windows 2003 Server machine needs a new C: drive, there are ways of upgrading to one of the latest drives without a complete software reinstall, but you may encounter some stumbling blocks due to the new Advanced Format technology, which uses 4 KB sectors.

When one of my PCs developed hard disk problems and I had to upgrade one of its drives, I also checked out my other machines. I found the C: drive of a Windows 2003 Server machine was about to fail. Windows 2003 is basically the server version of Windows XP, with which it shares most components. I opted for a 1 TB WD Red drive (WD10EFRX) by Western Digital, since these drives are designed for 24/7 operation, primarily for use in Network Attached Storage (NAS) appliances (desktop drives are only designed for an 8 hours on, 16 hours off use pattern).

I did not want to reinstall everything from scratch on that machine, so I used a Linux boot DVD and the GNU dd utility to mirror the failing drive onto the new WD Red drive (“sudo dd if=/dev/sda of=/dev/sdb”). As a result, all the partitions were in the same place and the same size as on the old drive, a Seagate Barracuda 7200.11 320 GB. The partitions on the old drive had not been aligned on 4 KB boundaries as is recommended to get decent performance on modern Advanced Format drives, so I needed to run an align tool to move the partition to the proper place. Western Digital offers one free to its customers, so that should be easy then, right?

Not quite. I encountered all the troubles described by others in this thread: basically, the download link for the WD Align tool (AcronisAlignTool_s_e_2_0_111.exe) takes you back to the same page, over and over, without an error message. It turns out that you need to be registered and logged in to the WD site for the download link to do anything. You need to register both your contact details (name, e-mail address, postal address, phone number) and your hard disk’s serial number. For the latter I had to shut down the machine again and take out the drive once more to take a look, because the number is not printed on the cardboard box, only on the drive itself.

Once I had registered my new drive, a download link did appear next to the registered product, but from it I found I could only download Acronis True Image and not the Acronis Align Tool (Advanced Format Software, WD Align). The WD Red series drives are all Advanced Format drives, as is pretty much every drive made since 2011, but WD says the series is designed for NAS use and hence doesn’t see the need for a fix for what it regards as a Windows XP problem.

Various people online recommended a download site in Ukraine that apparently offers a copy of that program, but if you’re downloading from sites like that you risk installing malware on your computer. Beware!

There is a safer solution. I had to register another Western Digital drive, an old WD10EARS, to get a usable download link for the Advanced Format Software. If you don’t happen to have one lying around, a Google image search for WD10EARS will show you many photographs of disk drives with clearly readable serial numbers on the label. And apparently, these serial numbers will do the trick! 😉

After I downloaded the software, I ran it to make a bootable CD (it also seems to be Linux-based), booted and ran it and 1 hour and 30 minutes later my C: partition was showing up as properly aligned.
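Whether a partition actually sits on a 4 KB boundary can also be checked from a Linux boot DVD with parted’s align-check command; a quick sketch, with drive and partition number as placeholders:

sudo parted /dev/sdb align-check optimal 1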

I can understand that Western Digital wants to restrict the use of licensed Acronis software to its own customers, denying other brands a free ride. However, the hoops it is making people jump through to be able to use one of their new drives as an upgrade to an existing Windows XP machine are just ridiculous. If a login is required to do the download, it should clearly say so. And if a drive uses 4 KB sectors (Advanced Format), its serial number should qualify you for the download. There are still millions of existing XP users out there and many will need new hard disks before they need a new computer.

Upgrading to a Western Digital WD20EFRX hard disk

All hard disks will die, sooner or later. The only way to avoid that is to retire a drive early enough. Often I upgrade drives because I run out of disk space, and migrate the data to a bigger drive. This time, however, it looks like one of my drives is about to die.

Over the last couple of months, one of my PCs that processes data 24/7 has been seizing up periodically, so I was starting to get suspicious about its hard drives (it has two of them). This week the Windows 7 event viewer reported that NTFS had encountered write errors on the secondary drive. It’s a Samsung SpinPoint F2 EG (Samsung HD154UI, 1.5 TB), which has basically been busy non-stop for over three years.

I installed smartmontools for Windows and it showed errors:

ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 099 065 051 Pre-fail Always - 5230
(...)
13 Read_Soft_Error_Rate 0x000e 099 065 000 Old_age Always - 5223
(...)
187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 12379
(...)
197 Current_Pending_Sector 0x0012 099 099 000 Old_age Always - 24

“Reported_Uncorrect” counts fatal errors and “Current_Pending_Sector” counts bad sectors the drive wants to replace with spare sectors as soon as it can. Neither is a good sign. So I have started a backup to another machine and will replace the drive with a new disk that I have ordered from Amazon.
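For reference, output like the above comes from querying the drive’s SMART attributes with smartctl, along these lines (the device name is a placeholder and naming differs on Windows builds of smartmontools):

smartctl -a /dev/sda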

The new drive is a 2 TB Western Digital WD20EFRX, which is part of WD’s “Red” series. These drives are specifically designed for 24/7 operation (as opposed to drives for 8/5 office computers). The drive is 0.5 TB bigger, which is just as well, as the old drive was getting close to filling up. Gradually I will be moving my processing to an Ubuntu server, which I already use as my main archive machine with a RAID6 drive array.