While living in Croatia in the 80s, I enjoyed plain yogurt every day. After moving to Canada, I was very disappointed with the yogurt here.
Now I can find good yogurt in Canada. It's really quite simple: look at the ingredients. Yogurt is supposed to contain milk ingredients and bacterial culture. It is not supposed to contain various gums, gelatin or corn starch. I suppose factories add that to create consistent texture even when they're unable to make proper yogurt which would have a good texture on its own.
Astro Original is consistently good. Middle Eastern brands like Halal and Phoenicia are also good. Danone usually disappoints.
In Canada you can buy lots of different yogurt with added flavour and/or fruit. I generally avoid those, because they have lots of added sugar or artificial sweeteners. It's better to enhance yogurt with fruit on your own. Sour cherries which were washed and pitted in the summer are great for this. Preparing it is a lot of work one day, but it's kind of fun and after that they're easy to use. I freeze them in small containers which after defrosting last for a week or so.
Don't shy away from fat. Research shows that fat isn't that bad, sugar is far worse, and even artificial sweeteners are unhealthy. Without fat, it's hard to get the right consistency and flavour, and many low fat products add other unhealthy things instead. So, 3% yogurt is fine, and don't assume 0% is better.
The same general advice applies to other milk products like kefir, huslanka and even sour cream.
Sunday, December 18, 2016
Facebook friend request accepted even though I never sent one
A while ago, someone I had been in touch with over 10 years ago appeared in my suggested friends list. That was probably due to a mutual friend. I looked at the suggested friend's page and public posts for a bit, and decided not to send a friend request. Later that day I was told that he had accepted my friend request.
I don't think I made a mistake and accidentally sent a friend request. Furthermore, in the friends activity log at https://www.facebook.com/dreamlayers/allactivity?privacy_source=activity_log&log_filter=cluster_8 I see a "became friends" entry, but no "sent a friend request" entry. I just scrolled through the entire log, to the beginning of my time on Facebook, and still see no friend request. For various other people I see a "sent a friend request" entry followed by "became friends".
This isn't a problem and what happened was probably a good thing. So, I'm not complaining. It's just weird and I'm wondering why this happened.
Monday, December 12, 2016
Notes about unlocking a Vonage VDV23 VoIP box
I got a used Vonage VDV23 VoIP box to play with. Vonage seems expensive and limited compared to other VoIP providers like voip.ms, so I decided to unlock the box for use with other providers. A guide is available, but I ran into various problems trying to follow it, and this was quite an adventure. You can access the guide for free, but need to register at voipfan.net and log in there. The comments here are meant as a supplement to the guide, not a replacement.
Note that buying this device for use with Vonage may be pointless. Vonage will give you a new adapter if you sign up, and may not allow you to use an old one.
Firmware loading via TFTP
Always use the internal PHY. It will tell you if autonegotiation worked and you got an Ethernet connection. I never had any success whatsoever when I selected the external PHY.
Network connectivity from the bootloader sometimes seems terribly unreliable. Pinging the VDV23 only worked once. Other times it at best answered a few pings, often with delays of more than a second. Received pings also spit out error messages to the console. Trying to ping first may even break subsequent TFTP attempts. Don't bother testing the network connection with ping; just try TFTP.
After trying my main PC and router, I ended up using the Ethernet port on this Inspiron 6400 laptop. Also, I was always setting up the network immediately after startup, not later on via the menu. When I did this, TFTP always worked.
The "Board IP Gateway" must exist, because the VDV23 will perform ARP queries for it when starting TFTP. Yes, it will even do this if the address is within the LAN. I just set it to the TFTP server address.
Getting the Admin password
Ignore the password at BF7F0118. Yes, there seems to be a password there, but I could never log in using it. If the password at BF3D00FA is all zeros, you didn't wait long enough for the VDV23 to download the password from Vonage and configure itself. If you wait too long, it will start upgrading firmware.
The first time I couldn't log in with the password from BF3D00FA either. I'm not sure if the "safe" vdv21-3.2.6-na_boot.bin firmware needs to get the password over the Internet from Vonage again. It may also start upgrading firmware if you wait too long. So, you need to disconnect from the Internet and continue with the rest of the guide.
Configuring SIP
I chose to configure via the XML file. First I tried to use an address accessible from the blue WAN port, but I didn't see any HTTP requests. Then I chose the default http://192.168.15.10 address and it worked via the yellow Ethernet port.
The device always sends an HTTP request soon after booting, so I was just unplugging it and plugging it back in to change the configuration. Seems like it first configures itself using stored parameters. Then if the XML file has different parameters the phone light will go out for a while as it reconfigures itself. Don't take the phone off hook during that process, because it will be testing the line. You may end up with a blinking phone light and a console complaint about the ringer equivalence number (REN) being too high.
Pay attention to the dialPlan lines. The one in the provided XML isn't enough for some voip.ms numbers, leading to a fast busy. I couldn't figure out how to support 2 and 3 character * codes except using *xx.T, which waits for a timeout before proceeding with the call. For experimenting, I could reconfigure dialPlan once from the console and see immediate results.
I forwarded port 8660 because it is localPort in the XML, but am not sure if that was necessary. The device is meant to be directly connected to the Internet and act as your gateway, providing NAT while prioritizing VoIP traffic. Since I'm still playing with it, I'm not going to do such a drastic change to my network.
Tuesday, December 06, 2016
How to set the MAC address when connecting to wireless using /etc/network/interfaces
If you try to set the MAC address using a hwaddress ether xx:xx:xx:xx:xx:xx line in /etc/network/interfaces, that fails. It seems ifup tries to set the address after connecting to WiFi. The address needs to be set before, while the interface is down. If you need the MAC address to connect, you won't connect, and if you don't need it and connect, you'll get "RTNETLINK answers: Device or resource busy" at the end. The solution is to use a pre-up ifconfig wlan0 hw ether xx:xx:xx:xx:xx:xx line instead of the hwaddress ether line.
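For illustration, a complete wlan0 stanza using this approach might look like the following; the MAC address is a placeholder, and the wpa- lines follow the pattern described in the WiFi post below:
allow-hotplug wlan0
iface wlan0 inet dhcp
pre-up ifconfig wlan0 hw ether xx:xx:xx:xx:xx:xx
wpa-ssid "<your ssid>"
wpa-psk "<your password>"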
Monday, December 05, 2016
Exiftran for Windows
The Windoze 10 Photos app doesn’t actually rotate JPEG files, and instead just sets the EXIF orientation. I want my photos to always be displayed in the correct orientation, so this is not acceptable, because the flag is ignored in many situations. Photos can be rotated with jpegtran, but my camera photos also contain a JPEG thumbnail in the Exif data, which also needs to be rotated. I used exiftran for this in Linux and I cross-compiled it for Win32 from Linux using i686-w64-mingw32-gcc. Download the build directory here. I'm distributing the whole thing because of the GPL. To use exiftran, rename exiftran to exiftran.exe and run it. For example, to rotate a file by 90 degrees in place, replacing it with a losslessly rotated version, use "exiftran -9i file".
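The typical batch use case is rotating every photo in a folder according to its existing Exif orientation flag. This is standard exiftran usage rather than anything specific to my build; whether *.jpg gets expanded depends on the shell you run it from:
# -a: rotate based on the Exif orientation tag, -i: modify files in place
exiftran -ai *.jpg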
Monday, November 28, 2016
Switching from IDE to AHCI on an ICH7-M based laptop
Many older laptops have SATA controllers and SATA drives, but set up the chipset in IDE mode instead of AHCI mode. Compared to IDE mode, AHCI can provide faster transfer rates, NCQ and power savings. It may not be possible to select AHCI mode in the BIOS, but it can be done by writing to PCI configuration registers directly. The actual IDE (PATA) ports are not supported in AHCI mode, so a PATA DVD drive would not be usable when AHCI is enabled.
You can read and write PCI registers using setpci in Linux, but if you want to make this change, it must be done before the device is detected. One way to do it is via a kernel patch, but recompiling the kernel every time it's updated is kind of annoying. Another way is via GRUB's setpci command. This is very easy, and works well, but breaks resume from sleep, because GRUB doesn't run then and the kernel will encounter the device in IDE mode.
How to do this via GRUB
First, you need to find your controller. Use lspci and look at the output. On my Inspiron 6400 I find "00:1f.2 IDE interface: Intel Corporation 82801GBM/GHM (ICH7-M Family) SATA Controller [IDE mode] (rev 01)". The 00:1f.2 is the device's identifier in Linux, and lspci -n | grep 00:1f.2 shows "00:1f.2 0101: 8086:27c4 (rev 01)". This gives you the PCI ID of the device: 8086:27c4.
According to the ICH7 datasheet PDF, you need to set the 8-bit MAP (Address Map) register at offset 0x90 to 0x40. (A patch I had before also set the SCRAE bit in the register at offset 0x94, but that just provides access to some AHCI registers when AHCI is not enabled, and is irrelevant when AHCI is enabled.) This means the following GRUB command is needed.
setpci -d 8086:27c4 90.b=40
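Before making the change, you can read the register's current value from a running Linux system to confirm you are looking at the right device; this sanity check is my own suggestion and isn't part of the procedure:
# read the 8-bit MAP register at offset 0x90 of device 00:1f.2
sudo setpci -s 00:1f.2 90.b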
You can press e when the GRUB menu is displayed, and add it to the end of the command that boots Linux. Adding it earlier caused a hang, probably because the BIOS cannot handle AHCI mode, so it needs to be set after the kernel and initrd are loaded. After finding this works, I added an /etc/grub.d/42_ahci file with a stripped down Linux menu entry taken from /boot/grub/grub.cfg. It would be better if I could call the Ubuntu menu entry as a subroutine, but that doesn't seem possible with GRUB.
#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
menuentry 'Ubuntu AHCI' --class ubuntu --class gnu-linux --class gnu --class os {
recordfail
insmod ext2
set root='hd0,msdosPARTITION_NUMBER_HERE'
linux /vmlinuz root=UUID=PUT_UUID_HERE ro resume=SWAP_PARTITION
initrd /initrd.img
setpci -d 8086:27c4 90.b=40
}
The Results
This change increases the speed of sudo nice -n -19 dd if=/dev/sda bs=1M of=/dev/null count=1000 from 128 MB/s to 141 MB/s with a PNY SSD2SC120G1CS1754D117-551 SSD. It also enables NCQ, and causes the DVD drive to not be detected at all. Here are the IDE lines which disappear from dmesg. Note that this is based on diff output and is just a list of lines in order, not a continuous section.
pci 0000:00:1f.2: [8086:27c4] type 00 class 0x010180
pci 0000:00:1f.2: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
pci 0000:00:1f.2: legacy IDE quirk: reg 0x14: [io 0x03f6]
pci 0000:00:1f.2: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
pci 0000:00:1f.2: legacy IDE quirk: reg 0x1c: [io 0x0376]
ata_piix 0000:00:1f.2: version 2.13
ata_piix 0000:00:1f.2: MAP [ P0 P2 IDE IDE ]
scsi host0: ata_piix
scsi host1: ata_piix
ata1: SATA max UDMA/133 cmd 0x1f0 ctl 0x3f6 bmdma 0xbfa0 irq 14
ata2: PATA max UDMA/100 cmd 0x170 ctl 0x376 bmdma 0xbfa8 irq 15
ata2.00: ATAPI: SONY DVD+/-RW DW-Q58A, UDS2, max UDMA/33
ata1.00: 234441648 sectors, multi 2: LBA48 NCQ (depth 0/32)
ata2.00: configured for UDMA/33
scsi 1:0:0:0: CD-ROM SONY DVD+-RW DW-Q58A UDS2 PQ: 0 ANSI: 5
sr 1:0:0:0: [sr0] scsi3-mmc drive: 24x/24x writer cd/rw xa/form2 cdda tray
cdrom: Uniform CD-ROM driver Revision: 3.20
sr 1:0:0:0: Attached scsi CD-ROM sr0
sr 1:0:0:0: Attached scsi generic sg1 type 5
Here are the new lines which appear instead:
pci 0000:00:1f.2: [8086:27c5] type 00 class 0x010601
pci 0000:00:1f.2: reg 0x24: [mem 0x00000000-0x000003ff]
pci 0000:00:1f.2: BAR 5: assigned [mem 0x80000000-0x800003ff]
ahci 0000:00:1f.2: version 3.0
ahci 0000:00:1f.2: enabling device (0005 -> 0007)
ahci 0000:00:1f.2: forcing PORTS_IMPL to 0xf
ahci 0000:00:1f.2: SSS flag set, parallel bus scan disabled
ahci 0000:00:1f.2: AHCI 0001.0100 32 slots 4 ports 1.5 Gbps 0xf impl SATA mode
ahci 0000:00:1f.2: flags: 64bit ncq ilck stag pm led clo pmp pio slum part
scsi host0: ahci
scsi host1: ahci
scsi host2: ahci
scsi host3: ahci
ata1: SATA max UDMA/133 abar m1024@0x80000000 port 0x80000100 irq 27
ata2: SATA max UDMA/133 abar m1024@0x80000000 port 0x80000180 irq 27
ata3: SATA max UDMA/133 abar m1024@0x80000000 port 0x80000200 irq 27
ata4: SATA max UDMA/133 abar m1024@0x80000000 port 0x80000280 irq 27
ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
ata1.00: 234441648 sectors, multi 2: LBA48 NCQ (depth 31/32), AA
ata2: failed to resume link (SControl 0)
ata2: SATA link down (SStatus 0 SControl 0)
ata3: SATA link down (SStatus 0 SControl 300)
ata4: failed to resume link (SControl 0)
ata4: SATA link down (SStatus 0 SControl 0)
A better solution
Another alternative, which can support suspend to RAM, is to set the registers from an altered DSDT. Read about the DSDT changes here and read about how to use a custom DSDT in Linux here. You first need to dump your DSDT, decompile it and fix it so it can compile properly. People do this sort of thing for running Mac OS on a PC, and you can find plenty of info on sites relating to that. Then you can add your modifications. In Ubuntu, simply copy the compiled dsdt.aml to /boot and run update-grub.
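As a rough sketch of that workflow (the iasl compiler comes from the acpica-tools or iasl package; file names are just examples):
# dump the DSDT from the running system and decompile it
sudo cat /sys/firmware/acpi/tables/DSDT > dsdt.dat
iasl -d dsdt.dat     # produces dsdt.dsl
# edit dsdt.dsl: fix anything that fails to compile, then add the register writes
iasl dsdt.dsl        # produces dsdt.aml
sudo cp dsdt.aml /boot/dsdt.aml
sudo update-grub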
Tuesday, November 22, 2016
Sharing an IR receiver via an opto-isolator
I wanted to share an infrared remote receiver between the Pace DC550D cable box and a Raspberry Pi 2 B running Kodi. This way there is no need to mount another visible IR receiver. It turned out to be harder than anticipated because I used the wrong opto-isolator.
IR Receivers
An IR receiver has 3 connections: ground, power and signal output. With many IR receivers, the output is open collector and active low. This means when there is no remote signal detected, the output is not driven, and when a signal is detected, it is driven low. In order to actually see a voltage change at the output, a pull-up resistor is needed. Then the output stays high with no signal and goes low when a signal is detected.
The Pace DC550D, like many devices, connects to the IR receiver via a standard headphone jack. The tip supplies 5 V power, the ring (middle contact) is the signal output, and the sleeve is ground. This arrangement makes the most sense, but I don't know if every other device uses it.
The signal stays at just over 3 V normally and goes low when IR is detected. This might be because it connects to a 3.3 V device. With an open collector output, the pull-up resistor does not need to connect to the same voltage as the device's power supply. So, an IR receiver powered by 5 V and with the pull-up resistor connected to 3.3 V will properly interface with 3.3 V devices.
Opto-isolation
Accessing the IR receiver signal is easy via a headphone splitter and headphone plug. It ought to be possible to connect the IR receiver directly to the Raspberry Pi. Everything is grounded together via HDMI, though also connecting the grounds would be good. However, I wanted opto-isolation for protection.
I connected the opto-isolator LED between 5 V power and the signal output. This meant the LED would come on when a signal is detected. This seems better because it draws power through the IR receiver output, not the pull-up resistor. There is a problem, though: the off state is just over 3 V and the LED voltage drop is only 1 V. I added 3 diodes in series to deal with this. A current limiting resistor is also required for the LED.
At first I tried to just connect the opto-isolator output to ground and a GPIO pin, and use the Raspberry Pi's internal pull-up. To begin with, this did not work at all, because the lirc_rpi module parameter for enabling the pull-up is ignored. The pull-up can be enabled either by running a short Python program every time, or by adding a dtparam=gpio_in_pull=up line after the dtoverlay=lirc-rpi line in /boot/config.txt.
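So the relevant part of /boot/config.txt ends up looking something like this (lirc-rpi defaults to GPIO 18 for input; add gpio_in_pin=<n> to the dtoverlay line if you wired a different pin):
dtoverlay=lirc-rpi
dtparam=gpio_in_pull=up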
With the pull-up enabled, I got a signal, but it was useless.
The problem was the turn-off time of the optocoupler. I chose a TIL113 with a photodarlington output, so that I would need to draw less power from the IR receiver. However, photodarlington opto-isolators have very slow shutoff. Adding a base resistor didn't help much, and even with a pull-up to 3.3 V on the output, LIRC was unreliable. Eventually I changed to a PS2010 with just a normal phototransistor. Due to its much lower current transfer ratio, I had to carefully choose the LED current limiting and output pull-up resistors. The output pull-up was still necessary; the Raspberry Pi's internal pull-up did not give a good enough signal.
The Results
This seems fully reliable now. I'm using a Harmony 300 remote, with the device set to Microsoft Media Player [Microsoft Xbox Media Center Extender]. It maps to the mceusb LIRC remote. The MyHarmony software doesn't configure all the possible keys, so it is necessary to edit the key configuration. The Prev button, corresponding to KEY_BACK, is especially important for the Kodi interface.
Monday, November 21, 2016
Connecting to WiFi using Debian's /etc/network/interfaces
It's fairly easy to connect to a wireless network from the command line if you know what you're doing; you only need /etc/network/interfaces. When connecting to a network with WPA or WPA2 encryption, you will need wpa_supplicant, but you don't need to create a conf file. Instead, options corresponding to wpa_supplicant.conf options can be placed directly in /etc/network/interfaces. Those options aren't documented in the man page, but you can find them in /usr/share/doc/wpasupplicant/examples/wpa_supplicant.conf.gz. Add a wpa- prefix and change the underscore to a dash. For example, scan_ssid in wpa_supplicant.conf becomes wpa-scan-ssid in /etc/network/interfaces. Something like this was suggested elsewhere:
iface wlan0 inet dhcp
wpa-scan-ssid 1
wpa-ap-scan 1
wpa-key-mgmt WPA-PSK
wpa-proto RSN WPA
wpa-pairwise CCMP TKIP
wpa-group CCMP TKIP
wpa-ssid "<your ssid>"
wpa-psk "<your password>"
You only need wpa-scan-ssid 1 if you want to connect to a non-broadcasting network. Some of the other parameters have sensible defaults. Note that RSN is WPA2 and CCMP is the most secure encryption. The wpa-ssid quotes will be stripped, so you can use an SSID with leading and/or trailing spaces. This is not possible with wireless-essid.
You can also use other /etc/network/interfaces options, like an allow-hotplug wlan0 line so the interface gets configured when you plug in the dongle, or a static IP instead of DHCP. The interfaces(5) man page only lists a few of the options. Others are provided by shell scripts installed by other packages, like /etc/wpa_supplicant/functions.sh for the wpa- options.
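For example, a stanza that brings the interface up when the dongle is plugged in and uses a static address instead of DHCP could look like this (the addresses are placeholders for your own network):
allow-hotplug wlan0
iface wlan0 inet static
address 192.168.1.50
netmask 255.255.255.0
gateway 192.168.1.1
wpa-ssid "<your ssid>"
wpa-psk "<your password>"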
Once everything is done, you can use sudo ifup wlan0 to connect. For some reason, that command can become unresponsive to attempts to kill it, and the ifup process may need to be killed elsewhere. Use sudo ifdown wlan0 to disconnect.
Thursday, November 17, 2016
Using Pidgin and purple-facebook for Facebook chat may also cause problems. App passwords may help.
Earlier I had problems with the Miranda NG Facebook plugin causing captchas on Facebook. I would get captchas even when posting and sharing very obviously innocent stuff. The plugin would get updated, and captchas would stop, but then they would start again later. So, I gave up on that and started using Pidgin with purple-facebook in Windows as well.
I had been using Pidgin with Purple Facebook in Linux for a while, but when I started using it in Windows, Facebook reacted as if my account had been hacked. I was forced to change my password, and examine recent activity. This problem repeated. I don't know if it's related to use of purple-facebook in Windows or use of it in too many places or a newer version or what. I reported the bug, and other users responded that they also encountered the problem.
Facebook has an app passwords feature which allows you to auto-generate passwords and use those passwords for apps instead of your main Facebook password. I'm now using that feature, with separate app passwords for every different Pidgin installation. So far so good.
Oh, and by the way, if Facebook forces you to change your password, don't bother trying to re-use the old password. They seem to blacklist the passwords used when their system thinks your account was hacked.
How to obtain higher resolution photos by altering the image URL
Often websites scale down the photos which they display. If you carefully examine the image URL and change it, you may be able to increase the resolution and quality. Look for parameters which specify the size and try to change them or strip them away. Also, if the photo URL refers to another path, try to directly access that other path.
You can often get an image URL by right clicking on the image. If that does not work, use your browser's developer tools or the media tab of page info in Firefox. The inspect feature of developer tools should make finding the image fairly easy.
Here are a few examples of these transformations:
http://wpmedia.windsorstar.com/2016/11/1d1.jpg?quality=55&strip=all&w=840&h=630&crop=1
http://wpmedia.windsorstar.com/2016/11/1d1.jpg
This was pretty obvious. Remove the parameters and get the original image size. It also seems to avoid JPEG re-encoding, and the accompanying increase in artifacts.
https://i.cbc.ca/1.3929515.1484163948!/fileImage/httpImage/image.jpg_gen/derivatives/16x9_460/pillar-lights.jpg
https://i.cbc.ca/1.3929515.1484163948!/fileImage/httpImage/image.jpg
The parameters do not necessarily start with a question mark.
http://www.amherstburg.ca/ThumbGen.ashx?%2fAreas%2fCustom%2fContentFiles%2fportrait+of+ferry+and+shoreline.png/600/452/w
http://www.amherstburg.ca/Areas/Custom/ContentFiles/portrait%20of%20ferry%20and%20shoreline.png
In this case, you see another path given as a parameter to ThumbGen.ashx. You need to remove the encoding of that path. Note that the path is encoded with percent encoding and + instead of spaces.
http://i2.cdn.cnn.com/cnnnext/dam/assets/161117110259-dutch-war-ship-exlarge-169.jpg
http://i2.cdn.cnn.com/cnnnext/dam/assets/161117110259-dutch-war-ship.jpg
In this case part of the "file name" was the culprit, and the change removes cropping and text.
https://images-na.ssl-images-amazon.com/images/I/51dIUaKqkYL._SY500_.jpg
https://images-na.ssl-images-amazon.com/images/I/51dIUaKqkYL.jpg
Again, the last part of the file name before the .jpg had to be removed.
https://c1.staticflickr.com/6/5722/22981183464_8c7035afbb_b.jpg
https://c1.staticflickr.com/6/5722/22981183464_8c7035afbb_h.jpg
For some sites, there are site-specific tricks which are not self-explanatory. On Flickr, try changing the last letter after the underscore to h. In the past o was needed for some photos and may still be needed sometimes.
http://s7d5.scene7.com/is/image/CanadianTire/0431755_1?defaultImage=image_na_EN&wid=160&hei=160&op_sharpen=1
http://s7d5.scene7.com/is/image/CanadianTire/0431755_1?defaultImage=image_na_EN&wid=1600&hei=1600&op_sharpen=1
In this case, removing the parameters returns the default small size, but you can change the parameters to increase image size. Note that it would be almost entirely pointless to increase the size beyond the original size. You would just get a larger file without added detail. On some sites, if you specify a size that's larger than the original size, you will get the original size.
If you are not satisfied with what you can obtain at one site, then try Google search by image or TinEye. Remember that even if the search engine doesn't find bigger images, another site may allow you to download a larger image by modifying the URL.
Thursday, September 01, 2016
Dropbox is not a reasonable way to publicly share files anymore
I first used Drop.io to share files associated with blog posts. When Facebook bought it and shut it down, I switched to using Dropbox. At the time, many people liked Dropbox, and I found it worked well. It's nice to be able to upload a file for public sharing by simply copying it to my Dropbox folder.
Today I logged in to change my password because of the big Dropbox security breach, and found this:
This is ridiculous because of how little information it provides. They don't tell you the bandwidth limit, how much you used, what links used a lot of bandwidth or when access could be restored. They don't even really tell you whether it was due to bandwidth or some other kind of abuse. I'm actually only using 280 MB of space total, the public folder is 205 MB, and files are generally small. So, I guess either the bandwidth limit is extremely low, or some files had become very popular.
I'm not going to pay money to distribute files to others for free. Also, I'm especially not going to pay money to a company which treats its customers like this. So, now I should find some other service, move my files over, and go through my blog and change the links. Changing links is going to be a lot of work, and I don't feel that's worthwhile. So, maybe another day. Suggestions for what service to use are welcome.
By the way, I recently got an e-mail from Dropbox saying that HTML documents will stop rendering in the browser. I don't think I'm sharing any HTML documents, but if I am that means they would need to be downloaded and then viewed. That's another reason to not use Dropbox. They seem to be pushing Dropbox Paper really hard now, with all the e-mail I've been getting about that, and this may be an attempt to get people to use that for documents.
Monday, May 23, 2016
Roast peanuts at a lower temperature for a more complex flavour
Online instructions say that peanuts which aren't in a shell should be roasted for 15 to 20 minutes at 350°F (177°C). That can give good results, but roasting at 150°C (302°F) for 25 minutes is better. The lower temperature retains more of the raw peanut flavour while removing its "green" aspect, and it still creates plenty of roasted peanut flavour. The result has the best of both the raw and roasted flavours. The lower temperature also doesn't burn the thin red peanut skins, so they don't become bitter like they do at 350°F.
In either case, peanuts should be one layer deep, and they need to be stirred a few times while they're roasting. Fresh roasted peanuts taste noticeably better than commercially roasted peanuts.
Monday, May 16, 2016
Bash on Ubuntu on Windows 10 is fast, but Cygwin integrates better with Windows
Up to now, I used Cygwin. Having it was very important, because it allowed me to do some things which would otherwise require booting into Linux or running a Linux VM. Cygwin is quite good in terms of capability. Problems are rare, there are many packages available, and it's often not too hard to compile other Linux software in Cygwin. Its main disadvantage is that some things are a lot slower.
Running ./configure shows the biggest slowdown compared to Linux, but even building via make is much slower. I think the main issue is that process creation is slower due to the more heavyweight nature of Windows processes plus the tricks Cygwin does to simulate Unix processes. It can be tolerable though, and I used Cygwin for a lot of Rockbox development.
There were faster alternatives available, like MinGW, Unix tools directly compiled for Windows and Microsoft's Subsystem for UNIX-based Applications (SUA). However, there were more compatibility issues with those, so although program performance is better, I would spend more time dealing with compatibility issues.
Bash on Ubuntu on Windows excited me, and motivated me to give Windows 10 another chance. It would allow even better compatibility with Linux applications without requiring a VM, and could offer better performance. After upgrading to the Fast Ring Insider Preview, it was easy to install. It works impressively well, and is very fast. Various things are still unimplemented, but most aren't very important. The most important missing feature is lack of fully working pseudoterminal support, which prevents X terminal apps from working.
Unfortunately, Bash integrates poorly with Windows, and you almost might as well be running a Linux VM. Ubuntu files are in directories under %LOCALAPPDATA%\lxss, with the root directory hidden at %LOCALAPPDATA%\lxss\rootfs and your home folder under %LOCALAPPDATA%\lxss\home. However, if you try to use ordinary Windows apps to access files there, you run into problems. Ubuntu programs don't see what you put there, and files which are newly modified in Ubuntu are inaccessible until you leave Bash. If you want to share files, you need to use one of the Windows drives, via /mnt/c or similar. Unix permissions don't work there. Using that is similar to how a VM can mount folders from the host.
When I was using Cygwin, I would use native Windows editors, and launch them from the Cygwin command line. Whenever I wanted a GUI view of a folder, I would launch Explorer. The difference between Windows and Cygwin paths is a bit of a problem, but I set up aliases to help:
function detach {
    # Run a command fully detached from the terminal, discarding its output
    ( nohup "$@" < /dev/null > /dev/null 2>&1 & disown )
}
function detach_c2w()
{
    # Run a Windows program detached, converting the second argument from a Cygwin path to a Windows path
    detach "$1" "$(cygpath -aw "$2")"
}
startfunc()
{
    # Open a file with its associated Windows application via "cmd /c start"
    /cygdrive/c/Windows/system32/cmd.exe /c start \"Title\" "$(cygpath -aw "$1")"
}
alias edit="detach_c2w /cygdrive/c/Program\ Files/Geany/bin/geany.exe"
alias fc=/cygdrive/c/Windows/system32/fc.exe
alias open=startfunc
explorerfunc()
{
    # Open a directory in Explorer, or open the parent folder with the file selected
    if [ -d "$1" ]; then
        /cygdrive/c/Windows/explorer.exe "$(cygpath -aw "$1")"
    else
        /cygdrive/c/Windows/explorer.exe "/select,$(cygpath -aw "$1")"
    fi
}
alias explorer=explorerfunc
All of that is impossible in Bash on Ubuntu on Windows, although it shouldn't be too hard to make a way to launch Windows executables from Bash. It's also impossible to compile things like Python extensions which use Windows features, and I don't see how that could be made possible. To do it you would need another separate Windows or Cygwin installation of Python. One thing you can do already is cross-compile for Windows, because you can cross-compile from Linux.
I'm sure Bash on Ubuntu on Windows is going to improve. This is just a preview release. However, I'm wondering about the extent to which its design will prevent it from integrating smoothly with Windows.
Thursday, May 12, 2016
Windows 10 may not be better than 7 for desktop use, but it is okay
Windows 8 seemed like a ridiculous sacrifice of desktop usability in an attempt to create a desktop, tablet and mobile hybrid. I didn't even feel a need to give it a chance, because it obviously sucked. Windows 8.1 didn't seem like a sufficient improvement. When I installed Windows 10 preview builds on my laptop, they were very unstable and still a mess due to various features transitioning to the Modern (Metro) user interface.
Ubuntu in Windows 10 made me want to try Windows 10 again. This time, Windows 10 seems usable. I could still list ways in which Windows 7 is better, but most of those are not a big problem. The duplication of features between the classic and modern interface may seem ridiculous, but now the Modern UI is more complete and usable. Lack of decoration in the Modern UI seemed ridiculous in screenshots, but after using it I find it remarkably okay and unoffensive. Its only big problem is bad text rendering in some apps, without subpixel anti-aliasing.
Windows 10 doesn't seem like a big improvement over Windows 7 in terms of desktop usability. Microsoft probably understands this, and offers free upgrades because of it. Otherwise, not profiting from upgrades would be ridiculous. Windows 10 mainly exists as a change of direction, still trying to unify the desktop with tablet and mobile interfaces, and trying to move applications to the Windows Store for profit. The aim is to create future profit, via smartphones, tablets and the Windows Store. I don't know if that part will be successful and worthwhile. However, the change in direction for the Windows desktop is finally succeeding. The interface seems like an alpha version in some respects, but it is usable. Many people are choosing to upgrade, or tolerating unintended Windows 10 upgrades.
There definitely are technological upgrades "under the hood". Windows 10 performs well despite running more services. It has some security improvements. It is clear however that upgrading to Windows 10 isn't going to make your applications run significantly faster generally. What Microsoft says about security seems more like persuasion to upgrade than a good argument, with no evidence of Windows 7 being successfully attacked a lot more frequently.
The most alarming changes in Windows 10 are those which reduce privacy and freedom. When you run Windows 10 you send who knows what to Microsoft. However, in practice this does not really affect you. People willingly give up privacy because they don't see real consequences. The decrease in control, for example with updates, might actually be a good thing. Those who really know what they're doing are still free to do whatever, and those who don't will find it harder to cause themselves problems.
I can't really say that I recommend upgrading from Windows 7 to Windows 10. The best I can say is that it's okay to upgrade. It's a good idea to take advantage of the upgrade offer for future use if it's really ending in late July, but you might want to go back to Windows 7 for now.
Sunday, May 08, 2016
Miranda NG Facebook plugin seems fixed in the development version
For a long time I was getting captchas when posting links in Facebook due to Miranda NG. Then I switched to the development branch so I could use the SkypeWeb plugin and ran into even worse problems.
Since then, the problem has been fixed. I have been using the Facebook plugin for over a month and I'm not getting any captcha requests on Facebook.
If you still don't want to use it, consider Pidgin with the purple-facebook plugin. It is different, using the protocol used by Facebook Messenger instead of the web interface.
Saturday, May 07, 2016
Western Digital's external drive division seems incompetent. Do not buy their drives!
Last fall I got a 5 TB Western Digital My Book drive, WDBFJK0050HBK-NESN. Looking at SMART data, I saw that attribute 192, Emergency Retract Count is increasing despite power not being cut to the enclosure.
The enclosure cuts power to the drive when the computer goes to sleep, the enclosure's sleep timer expires, or USB is unplugged. According to Western Digital support, it sends a standby immediate command before cutting power, which is the right thing to do. The problem seems to be that the enclosure cuts power too soon after the command, without giving the drive enough time to finish unloading the heads. If the drive gets a standby immediate command while the heads are unloaded due to the 8 second idle timer, it will first load and then unload heads, and that takes some time.
An emergency retract is more violent than a normal controlled retract performed while the drive has power. Heads are retracted using power generated from the disk platters' inertia. It wears out the drive more than a normal retract.
It's surprising that this problem exists in the first place. It is a Western Digital enclosure sold with a Western Digital drive inside. Surely they should know how long they need to wait after a standby immediate command before cutting power! The worst problem though is that they're not fixing this. I opened a support request on 11/29/2015. They asked for information, contacted the external drive team and gave me some information. Then all contact stopped. After a few months I escalated the case. They asked for the same information again, and there was no contact since then.
There is a workaround in Linux. Unmount the file system, sync, send a standby command with hdparm -y, wait for the drive to spin down and then unplug USB. (Don't use hdparm -Y for sleep instead, because the enclosure runs into problems when the drive is in sleep mode.) Once the drive is already in standby, nothing happens when the enclosure sends a standby immediate command, and cutting power immediately is okay. I don't know how to accomplish this in Windows though.
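As a concrete example of that sequence, assuming the My Book shows up as /dev/sdb with its file system on /dev/sdb1 (adjust the device names to match your system):
sudo umount /dev/sdb1     # unmount the file system
sync                      # flush anything still buffered
sudo hdparm -y /dev/sdb   # issue STANDBY IMMEDIATE so the drive spins down
# wait until the drive has spun down, then unplug the USB cable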
Because of this bug and the lack of support, I'm beginning to think that it's a bad idea to buy Western Digital external drives. Another thing to consider is that the enclosure controller board encrypts data even when no password has been set. So, you cannot simply take out the drive and access data via a different enclosure or as an internal drive. If the controller board fails you need to replace it with the same model of board, and if the drive fails you will have difficulty recovering data.
Wednesday, May 04, 2016
Converting a primary partition to a logical partition
MBR partitioning only allows for 4 partitions in the MBR itself. One of those can be an extended partition, which then allows for an unlimited number of logical partitions. If you already have 4 partitions in the MBR and no extended partition, and you want to create another partition, then you must convert one of the existing primary partitions into a logical partition inside a new extended partition. This can be done manually:
- Shrink the partition before the partition being converted. This frees up space for the extended partition table. Shrink by at least 2 megabytes.
- Note the start and end sectors of the partition you are converting, and then delete it.
- Create the extended partition. The start sector needs to be before the start sector of the partition being converted. With Linux fdisk it normally needs to be 2048 sectors (one megabyte) before, but with the -c=dos option it can be only 63 sectors before. There is no point in aligning the partition for an SSD, as what matters is alignment of the partitions with data inside them.
- Create a new logical partition with the exact same start and end sectors as the partition you deleted. Set it to the type the old partition had.
- If the partition was bootable, you may need to fix bootability by reconfiguring the boot manager.
- Grow the partition you shrunk at the beginning, using up all the space up to the extended partition.
If GRUB can no longer find its files because the partition number changed, commands like these at the GRUB prompt will get the system booting again (adjust msdos5 to your new logical partition number):
set root=(hd0,msdos5)
set prefix=(hd0,msdos5)/boot/grub
insmod normal
normal
I wrote the bootloader for chain booting Linux into the first sector of the extended partition /dev/sda4. Make sure you only write the code, which is 400 bytes in that case, and don't overwrite the partition table there. This makes Linux boot if the extended partition is set to be active. Some software doesn't like when an extended partition is active, so this is just an idea, not a recommendation.
Linux is like a video game
Right now I could say Linux is my main OS. I'm using Linux more than Windows, and I even started using it on my laptop. However, I cannot say that Linux is better in terms of user experience, or technically. The main good things about it are that it uses mostly free software (both in the money and liberty sense) and that it offers a lot of choice.
The desktop OS which people call "Linux" actually consists of a lot of components added on top of the Linux kernel itself. You can choose among many desktop environments, including KDE, GNOME, Xfce, Unity and more. There are even alternatives for various lower level packages. This is nice in terms of how it gives the user a lot of choice, but it leads to a terrible mess in terms of configuration. The same setting is affected by multiple configuration options. If you change desktop environments, you have to re-do some settings. Lower level packages have settings which conflict with desktop environment settings, leading to various results. Setting a lower-level configuration setting may cause the desktop environment to disallow changes, or it may set its own settings when it starts, overriding the lower level setting.
Another issue in Linux is that things change a lot. GNOME made a drastic change with GNOME 3, and KDE made a pretty big change with Plasma 5. I find that desktop environments are best when they intelligently evolve and improve over time. These drastic changes throw away progress and even features, and introduce bugs. Because of them I switched desktop environments several times. I used GNOME 2, then hated GNOME 3 when it came out, tried Xfce for a bit, and then switched to KDE. Then Plasma 5 sucked when it came out, and I switched to Cinnamon. Now I may be switching back to Xfce because it has improved. Lower level things also change. A lot changed when Ubuntu switched to systemd. Various settings and scripts from before needed to be moved and changed. Some packages still install scripts which are ignored since the switch to systemd.
Linux is also quite buggy. Even basic functionality like display of battery level in Cinnamon can be broken. Often there are ways around the problem, but it's still additional work you need to do to get things to work. Linux has excellent driver support in terms of the number of drivers, but bugs can be a problem there also.
In terms of performance, Linux showed a much more noticeable improvement than Windows when going from 2 GB to 6 GB of RAM, and when installing an SSD. It seemed slower than Windows before, and those upgrades helped it catch up. Maybe it is even a bit faster now. The fact that it was slower before probably means Linux is less efficient in terms of memory use and caching, but now that doesn't matter much anymore.
I used to judge Linux harshly based on its shortcomings. The only way to like it is to like the process of fixing these problems and customizing things to improve my experience. Because of the way things change fairly often, I don't feel that I'm learning knowledge that's valuable in a long-term way. It's more like I'm playing a video game designed by the developers. That seems okay for now.
The main factor that's driving me toward Linux is the direction Windows has taken recently. With Windows 8, Microsoft did something similar to GNOME 3: a big change which made the desktop experience worse. Even the Windows 10 previews didn't seem like an improvement over Windows 7. However, I tried a recent preview to explore the Ubuntu in Windows feature, and found that Windows 10 is pretty good. I'm still not sure whether it improves on the Windows 7 desktop experience, but it is okay. There is still the decrease in user freedom and the increased sending of data to Microsoft. So, I expect to continue using Linux as my primary OS.
Tuesday, March 22, 2016
Cold brewing coffee by agitating all night
Making coffee with boiling or near boiling water vapourizes a lot of the nice smelling volatile compounds (so the room smells nice but you don't get to enjoy them from your cup) and changes other compounds. Cold brewing overnight in the refrigerator makes very weak coffee (it doesn't extract much), but the taste is promising. One could probably compensate by using a lot more coffee, but I don't like the inefficiency. So, I tried to extract more by agitating all night.
The setup is simply a geared electric motor slowly spinning a water bottle with some water and ground coffee inside. The 24V motor is powered by 3V from an adjustable wall wart. It's all put together in a temporary way because this is an experiment. I made it extra rigid because I don't want anything to shake apart as the water sloshes around. The plastic tub underneath is just in case something breaks or leaks.
In terms of darkness and caffeine, the coffee is similar to normal coffee made with hot water. It is less bitter, but it also lacks some other elements of flavour. I guess some aren't very soluble in cold water, and some are probably oils and not water soluble at all. The result isn't bad, but it's not better than my normal way of making coffee in a French press.
I wonder about using other liquids in this setup. Would milk extract more? What about extracting with oil? I wouldn't want to drink that, and it probably wouldn't extract caffeine because it is a water soluble alkaloid, but it might produce strongly coffee flavoured oil which is useful for cooking.
Friday, March 18, 2016
Kodi in Raspbian
Raspbian seemed to be the obvious choice of operating system for the Raspberry Pi 2 Model B. It's customized for running on the Raspberry Pi, contains all the proprietary GPU stuff, is basically Debian, and receives regular updates. Its packages are compiled for earlier versions of the Raspberry Pi and don't take full advantage of the Pi 2's newer ARMv7 core, but according to most benchmarks that's not a big deal. I installed Raspbian Jessie.
Kodi (formerly known as XBMC) is available as a package in Raspbian and easy to install. However, my first impressions after running it from the X desktop weren't very good. The menus would sometimes get messed up, with the mouse leaving trails. Quitting would often leave me with a black screen. Sometimes it would be possible to switch into a text virtual terminal and back into X, but other times that would be impossible and a reboot would be necessary.
I mostly used the Pi as a low power always on PC. Now I'm giving Kodi another try, for using the Pi as a media player. This time I'm auto-starting it at boot time, without running X. Using the graphical Raspberry Pi configuration utility, I set it to boot to text mode and not auto-login the pi user. Automatic startup of Kodi can be enabled by editing /etc/default/kodi. The kodi user that it defaults to using is created when Kodi is installed.
Unfortunately, this runs into a big problem: after rebooting, Kodi ran but couldn't be controlled by the keyboard or mouse. It's a simple problem and there's a simple solution. The kodi user doesn't have access to input devices, and needs to be added to the input group via "sudo addgroup kodi input". However, that's very inconvenient to accomplish once Kodi is starting automatically and you can't control it, so make sure you do it before rebooting.
The other big problem was that most HD videos didn't play. There was no error message, though ENOSPC errors could be seen in the log. I guessed this was due to insufficient memory allocated to the GPU. The default is 64 MB, and Kodi recommends 128 MB for the Raspberry Pi 1 and 256 MB for the Raspberry Pi 2. After adding gpu_mem=256 to /boot/config.txt that was fixed.
Kodi now seems to work fine in Raspbian, with no need to install a Kodi centric distribution.
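For reference, here's a rough sketch of the whole setup, to be run before rebooting into the auto-started Kodi. The addgroup and gpu_mem steps are straight from the notes above; the sed line is an assumption about how /etc/default/kodi is structured, so check the file by hand.
# Give the kodi user access to keyboards and mice before autostart kicks in.
sudo addgroup kodi input
# Allocate 256 MB of RAM to the GPU so HD video decoding works.
echo "gpu_mem=256" | sudo tee -a /boot/config.txt
# Enable autostart in /etc/default/kodi (variable name assumed; verify it).
sudo sed -i 's/^ENABLED=0/ENABLED=1/' /etc/default/kodi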
Update: Another problem was the lack of a shutdown option in the Kodi power button dialog. I fixed that by changing PolicyKit settings using instructions from this page:
cat <<EOF | sudo tee /etc/polkit-1/localauthority/50-local.d/custom-actions.pkla
[Actions for kodi user]
Identity=unix-user:kodi
Action=org.freedesktop.upower.*;org.freedesktop.consolekit.system.*;org.freedesktop.udisks.*;org.freedesktop.login1.*
ResultAny=yes
ResultInactive=yes
ResultActive=yes
EOF
Thursday, March 17, 2016
A firmware upgrade fixed Linux USB 3 suspend problems
I recently got a NEC/Renesas uPD720201 based USB 3 card. It worked perfectly in Windows 7 with the 3.0.23.0 driver. It also worked when I booted Ubuntu 15.10 Wily Werewolf, but sometimes failed after suspend to ram (sleep) and sometimes prevented sleep. Upgrading to the latest development version, Ubuntu 16.04 LTS Xenial Xerus, did not fix the problem. Here is what the kernel logged when sleep failed:
xhci_hcd 0000:03:00.0: WARN: xHC save state timeout
suspend_common(): xhci_pci_suspend+0x0/0x70 returns -110
pci_pm_suspend(): hcd_pci_suspend+0x0/0x30 returns -110
dpm_run_callback(): pci_pm_suspend+0x0/0x140 returns -110
PM: Device 0000:03:00.0 failed to suspend async: error -110
PM: Some devices failed to suspend, or early wake event detected
Once this happened, all subsequent attempts to suspend failed the same way, and only after rebooting might suspend work again.
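To check whether you're hitting the same failure, the kernel log can be searched after a failed suspend attempt; a minimal sketch (the exact message text may vary between kernel versions):
# Look for the xHCI save state timeout in the kernel ring buffer.
dmesg | grep -i "xHC save state"
# Or search the current boot's kernel messages on systemd-based systems.
journalctl -k -b | grep -i "failed to suspend"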
According to Device Manager in Windows, the USB controller firmware version was 2020. Renesas doesn't offer any official downloads, but 2024 and 2026 are available for download elsewhere. Upgrading to 2026 (sometimes shown as 2.0.2.6) via k2026fwup1.exe (SHA1: 44184f1379c061067ac23ac30055a2b04ddf3940) seems to have fixed the problem. I haven't had any problems with suspend or resume, and the USB 3 ports remain functional after many suspend and resume cycles.
This probably also affects uPD720202, which is the same chip but with only two USB ports. The Renesas uPD720200 chip uses different firmware.
Some people reported problems with the 2026 upgrade procedure. Here someone recommends a 2024 downgrade followed by another 2026 upgrade if there are problems. I uninstalled the USB 3 driver in Windows 7 before the upgrade, thinking that something the driver does might interfere. The upgrade was quick and problem-free for me, and after rebooting I re-installed the driver.
BTW Here's a photo of the card:
Tuesday, March 15, 2016
Partitions can make whole drives inaccessible, as if they failed
After setting up some partitions via fdisk in Linux and rebooting, the GA-P35-DS3R F13 BIOS hung while detecting the SSD. The Dell Inspiron 6400 laptop didn't hang, but it reported an error when I put in that SSD. It seemed as if the drive failed. Then I booted Linux with the SATA cable unplugged, and plugged in the cable while Linux was running. The drive was detected and it worked perfectly. Overwriting the partitions in the MBR with zeroes caused it to be detected by the BIOS. Of course that's not a solution because I need that partition.
The solution was overwriting the CHS addresses with FE FF FF bytes. That is the same value used for CHS addresses beyond the 8 GB limit, where only LBA can be used. Modern software should always be using LBA anyways, so the CHS addresses shouldn't need to be correct.
This is probably a bug in the Intel ICH9R BIOS.
Here is the partition entry which triggered the bug:
80 41 02 00 07 00 33 0D 00 10 00 00 00 20 03 00
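For anyone wanting to apply the same fix by hand, here's a rough sketch using dd, assuming the first partition entry on /dev/sda is the offending one; double-check the offsets against a hex dump and back up the MBR first.
# Back up the MBR (first 512 bytes of the drive).
sudo dd if=/dev/sda of=mbr-backup.bin bs=512 count=1
# Inspect the partition table, which starts at byte 446 (0x1BE).
sudo dd if=/dev/sda bs=1 skip=446 count=64 2>/dev/null | hexdump -C
# In each 16-byte entry, bytes 1-3 are the starting CHS address and
# bytes 5-7 are the ending CHS address. For the first entry, overwrite
# both with FE FF FF:
printf '\xfe\xff\xff' | sudo dd of=/dev/sda bs=1 seek=447 count=3 conv=notrunc
printf '\xfe\xff\xff' | sudo dd of=/dev/sda bs=1 seek=451 count=3 conv=notrunc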
Tuesday, March 01, 2016
Fixing Prolific PL2303 code 31 error
The code 31 error seems to happen when a PL2303 device tries to create a COM port which already exists. For example, it happens if you already have COM8 and you plug in a PL2303 device which tries to also configure itself to COM8.
COM port numbers are usually defined based on the USB port, so the simplest solution is to plug the device into a different port, and hope it will get a different COM port number.
The other solution is to reassign the port number. You can do this via Device Manager. Go into the Ports (COM & LPT) section, right click on the port you want to change, select Properties, go into the Port Settings tab, and click on the Advanced button. In the bottom of the Advanced Settings dialog you can change the COM port number. The list helpfully shows what numbers have been assigned to other devices, including devices which are not currently plugged in. If this cannot be done due to the code 31 error, either change the conflicting device's port, or unplug the conflicting device, plug in the new device and then change the new device's port.
I don't trust Windows with rare operations like this, because they probably haven't been tested enough. A port may fail to work, or you might get a bluescreen. So, I recommend rebooting after reassigning a port. Everything should be fine afterwards.
Monday, February 15, 2016
Is it time to stop using Firefox because it is less secure?
Firefox has been my favourite web browser for a decade. Recently I started using Chrome in Linux because it performs better on most web pages, but there doesn't seem to be a Chrome performance advantage in Windows. Experiences with Chrome show that it is a good browser, but I prefer Firefox because I prefer Mozilla over Google.
Nowadays, Google is the Microsoft of the Internet. They own some of the most popular web sites plus the most popular web browser. Advertising is their main source of revenue, which makes them bad for privacy. Potentially giving so much of my information to Google does not seem good.
It seems like Firefox has lost its way recently. New unpopular features were added, while old features were removed, upsetting some loyal users. There doesn't seem to be much real progress. I hate how it won't be possible to disable extension signing, which means you will need to install a different build if you want to use an extension which wasn't signed via Mozilla. However, all of that was ultimately acceptable.
What really made me stop and think was when Pwn2Own announced that Firefox won't be attacked because it is too easy. Looking at comments on several sites, I didn't really see a valid defense of Firefox. Instead, some people expressed an irrational conviction that Firefox is safe and doesn't need security improvements. Then I looked at statistics on exploits, comparing Chrome to Firefox. In 2015, Chrome had 8 code execution vulnerabilities, and Firefox had 83. Previous years show a similar pattern.
Is using Firefox in Windows unwise because it is less secure? Running it in Linux probably gives you security through obscurity, but I'm already using Chrome in Linux.
Firefox will eventually get sandboxing via Electrolysis, but when? It seems like that has been "coming soon" for a long time. Is waiting for it to be released a good idea?
Friday, February 12, 2016
Don't use the Miranda NG Facebook plugin
Since Facebook disabled XMPP (Jabber), the options for connecting to Chat via a third party plugin are limited. Miranda NG, my favourite IM client, has a Facebook protocol which I was using. I had been getting captchas for sending or posting totally innocent links, even to Wikipedia or Slashdot, and even via the web interface. I didn't know what was causing this. Then I switched from the stable to the development Miranda NG version so I could use the SkypeWeb plugin. Soon after that, Miranda NG said my computer is infected and needs to be cleaned. After doing a bit of searching, I found that others had similar problems due to Miranda NG. Since then I intermittently can't send links at all.
It has been a few days since I stopped using the Miranda NG's Facebook protocol, the problem hasn't gotten better, and I'm annoyed. I don't want to use an instant messaging application which has the capability of blocking messages based on content. Much older IM protocols which allow direct connections between clients are so much better. I'm also not happy with their ability to block links in posts.
Basically, Facebook is a piece of shit which I use because others use it. Social networking should function via an Internet standard, not via proprietary web sites. It should be distributed, so you can select or run a server, instead of just logging in to one place.
I'm deactivating Facebook because it's too annoying now. Maybe eventually when I reactivate, it won't annoy me with captchas and link blocking.
The purple-facebook plugin for Pidgin claims to use the protocol used by the Android Facebook Chat application. That could work better and not trigger captchas. Pidgin is worse than Miranda, lacking in features and flexibility. It's okay though, and I may start using it in Windows because of this.
Wednesday, February 03, 2016
Testing a 10X macro lens filter
The Olympus C-770 Ultra Zoom camera already has good macro capability, so there is no need for anything to help with that. However, experiments with a magnifying glass showed an interesting possibility for macro photography with zoom, and the 10X filter was very inexpensive at dx.com after applying the 2015 holiday $3 off $6 gift card.
Here's an nRF24L01+ module with C-770 super macro mode, without the macro lens filter. This is as close as I could get.
The 10x Macro lens filter allowed me to get a bit closer. It's a very slight improvement in macro capability, with a corresponding decrease in depth of field. Image quality degradation from the filter is minimal.
Using maximum zoom, it's possible to get even more magnification. There is significant image quality degradation, but you still get to see a lot more detail. You can see individual strokes of the laser engraving of the frequency on the crystal.
Taking these kinds of pictures required manual focus. Both the depth of field and the range of focus adjustment are very limited. It's easiest to set a specific focus, and then move the camera to optimize sharpness. The camera's focus adjustment would probably come in handy for focus stacking if it was mounted securely.
For best magnification, I set the focus to the closest possible position. Due to the 10x macro lens filter, the actual camera distance is nice. It's not too close, allowing for good lighting, and not too far either.
I got the best sharpness at f/8. It seemed to minimize the hazy blurring seen in the above photos. It required good lighting to prevent blur from camera shake, but that's easy when the camera doesn't need to be too close to the subject.
With a more open aperture, there was an increase in blur when I half-pressed the shutter button, and f/8 decreased that. I nevertheless made my final focus adjustments with the shutter button half pressed.
This is a vacuum fluorescent display (VFD). You can see the cathode filaments which heat up and emit electrons, control grids used for multiplexing, and the anodes below them, which light up when electrons hit them.
This is an MC68705P3S microcontroller. The chip has a window because the program is stored in EPROM, which is erased by ultra-violet light.