ESXi Embedded Host Client Overview

As I have seen numerous rumors in the VMware forums that the next major release will deprecate the vSphere thick client (and given the simple fact that VMs created using the most recent virtual hardware versions include features that cannot be managed with the desktop client), I decided to take the plunge and install the HTML5 fling host client on my ESXi host.
The fling can be downloaded from the VMware Labs site. The standard caveat applies to this fling like anything else that you install from VMware Labs:

I also understand that Flings are experimental and should not be run on production systems.


There are several ways to install it, but the recommended method is to use the esxcli command from the console of the ESXi host.  Since I have SSH connectivity allowed to my ESXi host, this is the method that I chose.  You can also remove it via the esxcli command from the console as well.
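The install and removal steps can be sketched as follows; note that the VIB path and the "esx-ui" bundle name are assumptions based on how the fling typically ships, so check the fling's own install notes before running anything:

```shell
# Sketch only -- run on the ESXi host over SSH. The VIB path and the
# "esx-ui" bundle name are assumptions; verify them against the fling's notes.
run() { if [ -n "$DRY_RUN" ]; then echo "$*"; else "$@"; fi; }

DRY_RUN=1   # print the commands instead of executing them

# Install the host client from a VIB copied to the host:
run esxcli software vib install -v /tmp/esxui-signed.vib

# Later removal is by bundle name:
run esxcli software vib remove -n esx-ui
```

The dry-run wrapper is just a convenience for reviewing the commands before committing to them; drop `DRY_RUN=1` to execute for real.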

Logging In

After the installation has completed you can access the login screen by visiting the ESXi host welcome page or by appending “/ui” to the end of the FQDN (or IP) of your ESXi host.  If you navigate to the ESXi host welcome page, you will see an additional link, “Open the VMware Host Client”, that has been added to the screen above the paragraph describing the functionality of VMware vCenter.  vCenter also has a new HTML5 fling, but as I am running the “free” version of ESXi, I don’t have access to vCenter.

Host Management and Monitoring

The initial view after logging in is of the Host Management functions.  From here you have easy access to create or register VMs, shutdown or restart the host, and additional functions which are in the actions dropdown menu.  As you can see from the following screenshots, the HTML5 interface gives you access to a plethora of management and monitoring screens where you can manage or view the various settings and performance metrics of the ESXi host.

VM Management

Concise VM List

The system also gives you complete control of the management and monitoring of the configured virtual machines, of course.  The main VM management screen lists the configured VMs and gives you the ability to create new VMs as well as easy access to the following functions for any VM you select in the list:

  • console
  • power on, power off, suspend

When you select a VM the bottom pane changes to present a summary screen showing a preview of the current console, as well as basic hardware stats on the VM.  When you select a single VM from the list you are also given access to a menu showing the various actions that you can perform on the VM.

Individual VM

When you select a VM from the listing you are presented with a more complete picture of the settings, performance and configuration of the VM.  You are also presented with any notifications regarding the state of the VM (mismatched OS, VMware Tools state, etc.).  You also have the ability to view a series of monitoring graphs similar to what is available for the overall ESXi host.

Storage Management

Selecting Storage from the left column presents a view of the storage subsystem of the ESXi host.  From here you can manage the datastores (the default view), as well as the storage adapters and devices.  Selecting a single item in the datastores list is similar to the way the listing of VMs works, in that you are presented with a summary of the single datastore.  Clicking on the hyperlink to a single datastore opens a view that is specific to that datastore, confining the information and actions to it.  The monitoring screen for a single datastore is similar to the event listing of the overall system; however, it is restricted to events involving only that datastore.

Network Management

The features of the networking subsystem management screen have more in common with the monitoring section of the main ESXi host.  You are presented with a set of tabs from which you can manage the following:

  • port groups
  • vswitches
  • physical and vmkernel NICs
  • TCP/IP stacks
  • firewall rules


While I am most familiar with the “thick” client and the flash-based interface of vCenter, I have found the HTML5 fling replacement for the standalone client to be quite functional.  I am still adjusting to the new workflow for some functions I was used to in the standalone client.  I look forward to the improvements to the web client that are on the way.

Configure Google Authenticator on CentOS 7


As part of the rebuild on my Plex Media Server using CentOS 7, I had intended to configure Google Authenticator but hadn’t gotten around to doing it yet.  As I got into the process recently I discovered that many of the steps that I had used when configuring my CentOS 6 Digital Ocean droplet were out of date to the point of uselessness.

I also discovered that most of the guides that I found either relied on the older 1.0 code release, which was also outdated, or used an unknown RPM repo.  As such, I decided to write up the process that I followed to use the code downloaded from the official GitHub repository.

NOTE: If you are doing this in an enterprise setting, it is likely that your company has particular settings and restrictions that you may need to adhere to (e.g., not running things as the root user). Also, please note that all of my examples use the CentOS defaults unless specifically noted.

Install the pre-requisites

As I started with a minimal CentOS 7 install (since I don’t have any need for a GUI or the other extras), many of the packages I needed were not part of the baseline install set.  Here is the list of additional packages that I installed (note that this will pull in quite a few additional items to satisfy dependencies):

  • autoconf
  • automake
  • bind-utils
  • gcc
  • libtool
  • make
  • nmap-netcat
  • ntp
  • pam-devel
  • unzip
  • wget
[root@server ~]# yum -y install autoconf automake bind-utils gcc libtool make nmap-netcat ntp pam-devel unzip wget



Configure and test NTP

An essential part of the 2FA system is an accurate clock.  This is because, at its heart, the Google Authenticator system is a Time-based One-time Password (TOTP) algorithm.  If you have too much skew between the clock on the server and the clock on the client, then your codes will fail intermittently.
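The skew sensitivity follows directly from how TOTP works: both sides compute a counter from the Unix time divided by the 30-second step, and the code is derived from that counter.  A quick sketch of the counter arithmetic (the timestamp here is just an example value):

```shell
# Both client and server derive the code from floor(unix_time / step);
# a clock skewed past the tolerance window lands on a different counter,
# so the generated codes stop matching.
epoch=1472418000   # example timestamp (late August 2016)
step=30            # default TOTP step size in seconds
echo $(( epoch / step ))   # prints 49080600, the counter both sides must agree on
```

A skew of more than the allowed window (by default one step on either side) shifts this counter and the login fails, which is why NTP comes first.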

Test NTP pool DNS resolution

First, you need to make sure that the server name used in the NTP configuration file will resolve to an IP address.

CentOS 7 uses the following as the NTP server pool set:

[root@server ~]# nslookup

Non-authoritative answer:

Test Network connectivity

Next up, you should test that you actually have baseline connectivity to the NTP endpoint.  In general this should just work, like the DNS resolution did; however, if you are running your server behind an IPS or a network firewall, it is possible that you will need to explicitly allow outbound NTP connections over port 123/UDP.

Please note that in the following code snippet, a result of zero (0) indicates a successful connection, whereas a result of one (1) indicates a failure to connect to the endpoint.

[root@server ~]# echo | nc -u -w1 123 >/dev/null 2>&1 ;echo $?
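The zero/one convention is just the shell's exit-status mechanism, and it can be seen without any network at all by substituting `true` and `false` for the nc probe:

```shell
# $? expands to the exit status of the previous command:
# 0 means success, any non-zero value means failure.
true;  echo $?   # prints 0
false; echo $?   # prints 1
```

The `;echo $?` tacked onto the nc command in the snippet above works the same way: it prints the probe's exit status so you can read the result at a glance.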

Test basic NTP functionality

Now that you have tested your DNS resolution and baseline connectivity to the NTP endpoint (and resolved any issues you found), you need to perform an actual application-layer test.

[root@server ~]# ntpdate -q
server, stratum 2, offset -0.006191, delay 0.10498
server, stratum 2, offset -0.001065, delay 0.11761
server, stratum 3, offset -0.005018, delay 0.06509
server, stratum 2, offset -0.000588, delay 0.09003
28 Aug 17:07:05 ntpdate[16085]: adjust time server offset -0.000588 sec

Configure NTP daemon for startup

If the test completes successfully, the next step is to configure the NTP daemon to start at system boot and then to start the service.

[root@server ~]# systemctl enable ntpd
Created symlink from /etc/systemd/system/ to /usr/lib/systemd/system/ntpd.service.
[root@server ~]# systemctl start ntpd
[root@server ~]# systemctl status ntpd
● ntpd.service - Network Time Service
Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2016-08-28 17:08:33 EDT; 2h 34min ago
Process: 16150 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 16151 (ntpd)
CGroup: /system.slice/ntpd.service
└─16151 /usr/sbin/ntpd -u ntp:ntp -g

Aug 28 17:08:33 server ntpd[16151]: Listen normally on 2 lo UDP 123
Aug 28 17:08:33 server ntpd[16151]: Listen normally on 3 ens192 UDP 123
Aug 28 17:08:33 server ntpd[16151]: Listen normally on 4 ens192 fe80::20c:29ff:fe7e:af2f UDP 123
Aug 28 17:08:33 server ntpd[16151]: Listen normally on 5 lo ::1 UDP 123
Aug 28 17:08:33 server ntpd[16151]: Listen normally on 6 ens192 2601:901:8001:a070:20c:29ff:fe7e:af2f UDP 123
Aug 28 17:08:33 server ntpd[16151]: Listening on routing socket on fd #23 for interface updates
Aug 28 17:08:33 server systemd[1]: Started Network Time Service.
Aug 28 17:08:33 server ntpd[16151]: c016 06 restart
Aug 28 17:08:33 server ntpd[16151]: c012 02 freq_set kernel -115.767 PPM
Aug 28 17:08:34 server ntpd[16151]: c615 05 clock_sync

Build and Install Google Authenticator

Download the codebase

As I stated in the intro, many of the guides used an outdated codebase from the now-defunct Google Code project hosting service, or they used an RPM repository that I didn’t find completely trustworthy.  To resolve those problems, I decided to rely on the GitHub repository that is maintained by the Google developer team.

The first step is to download the code.  The GitHub repository for the PAM module is located at

[root@server ~]# cd /opt
[root@server opt]# wget -q

Compile and install

NOTE: as part of the build and install process, I have included a directive to change the base directory used for the installation.  This is needed because of two symbolic links that must be created, which will be covered in the next section.

The following are the steps as laid out in the PAM Module Instructions from the wiki:

make install

To anyone who has compiled from source before, these steps are likely sufficient; however, I have included an abbreviated version of my session since it may provide additional information for those who aren’t quite so familiar with the process.

[root@server ~]# cd /opt/google-authenticator-master/libpam/
[root@server libpam]# ./
libtoolize: putting auxiliary files in AC_CONFIG_AUX_DIR, `build'.
libtoolize: copying file `build/'
libtoolize: putting macros in AC_CONFIG_MACRO_DIR, `build'.
libtoolize: copying file `build/libtool.m4'
libtoolize: copying file `build/ltoptions.m4'
libtoolize: copying file `build/ltsugar.m4'
libtoolize: copying file `build/ltversion.m4'
libtoolize: copying file `build/lt~obsolete.m4'
installing 'build/config.guess'
installing 'build/config.sub'
installing 'build/install-sh'
installing 'build/missing'
installing 'build/depcomp'
parallel-tests: installing 'build/test-driver'
[root@server libpam]# ./configure --prefix=/usr
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...


  google-authenticator version 1.01
  Prefix.........: /usr
  Debug Build....:
  C Compiler.....: gcc -std=gnu99 -g -O2
  Linker.........: /bin/ld -m elf_x86_64  -ldl
[root@server libpam]# make
make  all-am
make[1]: Entering directory `/opt/google-authenticator-master/libpam'


/bin/sh ./libtool  --tag=CC   --mode=link gcc -std=gnu99  -g -O2   -o google-authenticator src/google-authenticator.o src/base32.o src/hmac.o src/sha1.o  -ldl
libtool: link: gcc -std=gnu99 -g -O2 -o google-authenticator src/google-authenticator.o src/base32.o src/hmac.o src/sha1.o  -ldl
make[1]: Leaving directory `/opt/google-authenticator-master/libpam'
[root@server libpam]# make install
make[1]: Entering directory `/opt/google-authenticator-master/libpam'
  /bin/mkdir -p '/usr/bin'
  /bin/sh ./libtool   --mode=install /bin/install -c google-authenticator '/usr/bin'
libtool: install: /bin/install -c google-authenticator /usr/bin/google-authenticator
  /bin/mkdir -p '/usr/share/doc/google-authenticator'
  /bin/install -c -m 644 FILEFORMAT '/usr/share/doc/google-authenticator'
  /bin/mkdir -p '/usr/share/doc/google-authenticator'
  /bin/install -c -m 644 totp.html '/usr/share/doc/google-authenticator'
  /bin/mkdir -p '/usr/lib/security'
  /bin/sh ./libtool   --mode=install /bin/install -c '/usr/lib/security'
libtool: install: /bin/install -c .libs/ /usr/lib/security/
libtool: install: /bin/install -c .libs/pam_google_authenticator.lai /usr/lib/security/
libtool: finish: PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/sbin" ldconfig -n /usr/lib/security
Libraries have been installed in:

If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
  - add LIBDIR to the `LD_LIBRARY_PATH' environment variable
    during execution
  - add LIBDIR to the `LD_RUN_PATH' environment variable
    during linking
  - use the `-Wl,-rpath -Wl,LIBDIR' linker flag
  - have your system administrator add LIBDIR to `/etc/'

See any operating system documentation about shared libraries for
more information, such as the ld(1) and manual pages.
make[1]: Leaving directory `/opt/google-authenticator-master/libpam'

Configure PAM and SSH

The next two subsections cover the creation of two symbolic links (mentioned in the note in the previous section) and the change to the configuration file for the PAM system governing the SSH daemon.  Additionally, there are a couple of modifications that you might need to perform on the SSH daemon configuration as well, depending on your setup.

Add symlinks to 64bit libraries and module

During my installation, I ran into a problem caused by the location used for the installation of the PAM library and object files.  I solved this in two steps:

  1. Change the base location of the file installation
  2. Addition of two symbolic links

The first step was covered in the build process by the addition of a parameter to the call to the configure script:

[root@server libpam]# ./configure --prefix=/usr

The second step was accomplished using the following commands:

[root@server ~]# cd /usr/lib64/security/
[root@server security]# ln -s /usr/lib/security/
[root@server security]# ln -s /usr/lib/security/

As you can see from the following directory listing, this results in two symbolic links pointing back to the installed files in /usr/lib/security.

[root@server security]# pwd
[root@server security]# ls -l `find . -maxdepth 1 -type l -print`
lrwxrwxrwx  1 root root 45 Aug 28 17:58 ./ -> /usr/lib/security/
lrwxrwxrwx  1 root root 45 Aug 28 17:58 ./ -> /usr/lib/security/
lrwxrwxrwx. 1 root root 15 Jan 17  2016 ./ ->
lrwxrwxrwx. 1 root root 11 Jan 17  2016 ./ ->
lrwxrwxrwx. 1 root root 11 Jan 17  2016 ./ ->
lrwxrwxrwx. 1 root root 11 Jan 17  2016 ./ ->
lrwxrwxrwx. 1 root root 11 Jan 17  2016 ./ ->

Update PAM config file for SSH

After you have the files installed in the correct locations, you need to update the PAM configuration for SSH.  Now, there are a number of ways that you can go about doing this.  I chose to use an inline edit with sed.

NOTE: editing PAM configurations can result in denying access to your system if performed incorrectly.  When editing the SSH PAM configuration file, always do the following:

  1. BACKUP the original file
  2. Make sure you have console access
  3. Open a secondary SSH terminal so that if you screw up you are still logged in
[root@server ~]# cd /etc/pam.d/
[root@server pam.d]# cp -p sshd sshd_backup
[root@server pam.d]# sed -i "2iauth       required" sshd
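It is worth rehearsing the `2i` edit on a scratch file first: `2i` inserts the new text as line 2, pushing everything below it down one line.  The module name in the rule below is the one the PAM module installs under (, but verify it matches your system:

```shell
# Build a two-line stand-in for /etc/pam.d/sshd, insert at line 2, inspect.
printf '%s\n' '#%PAM-1.0' 'auth substack password-auth' > /tmp/sshd.test
sed -i '2iauth required' /tmp/sshd.test
sed -n '2p' /tmp/sshd.test   # shows the newly inserted rule on line 2
```

Once the rehearsal looks right, the same sed invocation (against your backup-protected /etc/pam.d/sshd) applies the real change.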

If you would like to use HOTP (counter-based) verification instead of TOTP, add the following to the end of the text inserted by the sed command in the example above:


Update the SSH configuration

In order to make the SSH part work there are three configuration settings that have to be set in a particular way:

  • PasswordAuthentication
  • UsePAM
  • ChallengeResponseAuthentication

The configuration file is located here:


The desired settings for these are as follows:

PasswordAuthentication yes
UsePAM yes
ChallengeResponseAuthentication yes

The first two are likely already set to the desired value, but the third will need to be changed from “no” to “yes” on a default CentOS 7 install.  To check your settings, run the following command and then edit the configuration file if needed:

[root@server ~]# egrep "PasswordAuthentication|UsePAM|ChallengeResponseAuthentication" /etc/ssh/sshd_config | egrep -v "#"
PasswordAuthentication yes
ChallengeResponseAuthentication yes
UsePAM yes
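Since ChallengeResponseAuthentication is usually the only setting that needs changing, a targeted sed works; here it is rehearsed on a scratch copy rather than the live /etc/ssh/sshd_config:

```shell
# A stock CentOS 7 sshd_config ships ChallengeResponseAuthentication as "no";
# flip it to "yes" and confirm the result.
printf '%s\n' 'UsePAM yes' 'PasswordAuthentication yes' \
    'ChallengeResponseAuthentication no' > /tmp/sshd_config.test
sed -i 's/^ChallengeResponseAuthentication no$/ChallengeResponseAuthentication yes/' \
    /tmp/sshd_config.test
grep '^ChallengeResponseAuthentication' /tmp/sshd_config.test
```

As with the PAM edit, back up the real file and keep a working session open before applying this to /etc/ssh/sshd_config.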

Next, restart the SSH daemon so that the changes take effect.  Warning: until you have completed the userland configuration (see the next section), your SSH users won’t be able to log in. (This includes the root user, but you aren’t logging in remotely as root anyway, right?)

Configure userland

This step involves configuring each user to use a TOTP token by running the google-authenticator binary and then answering some questions.  Here is the usage statement for the binary:

google-authenticator [<options>]
 -h, --help Print this message
 -c, --counter-based Set up counter-based (HOTP) verification
 -t, --time-based Set up time-based (TOTP) verification
 -d, --disallow-reuse Disallow reuse of previously used TOTP tokens
 -D, --allow-reuse Allow reuse of previously used TOTP tokens
 -f, --force Write file without first confirming with user
 -l, --label=<label> Override the default label in "otpauth://" URL
 -i, --issuer=<issuer> Override the default issuer in "otpauth://" URL
 -q, --quiet Quiet mode
 -Q, --qr-mode={NONE,ANSI,UTF8}
 -r, --rate-limit=N Limit logins to N per every M seconds
 -R, --rate-time=M Limit logins to N per every M seconds
 -u, --no-rate-limit Disable rate-limiting
 -s, --secret=<file> Specify a non-standard file location
 -S, --step-size=S Set interval between token refreshes
 -w, --window-size=W Set window of concurrently valid codes
 -W, --minimal-window Disable window of concurrently valid codes

The following snippet shows the command output and answers from a run of the google-authenticator binary for a regular user. (NOTE: I have removed the actual emergency codes and the secret key; also, if your terminal supports it, you may be presented with a QR code to scan.)

[root@server ~]# su - johndoe
[johndoe@server ~]$ google-authenticator

Do you want authentication tokens to be time-based (y/n) y
Your new secret key is: SECRET_KEY_HERE
Your verification code is VERIFY_CODE_HERE
Your emergency scratch codes are:

Do you want me to update your "/home/johndoe/.google_authenticator" file? (y/n) y

Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y

By default, tokens are good for 30 seconds. In order to compensate for
possible time-skew between the client and the server, we allow an extra
token before and after the current time. If you experience problems with
poor time synchronization, you can increase the window from its default
size of +-1min (window size of 3) to about +-4min (window size of
17 acceptable tokens).
Do you want to do so? (y/n) y

If the computer that you are logging into isn't hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y

If you would like to provide a single command for your users to run, so you can avoid walking them through the answers laid out in the above example, use the following:

google-authenticator --time-based --disallow-reuse --window-size=17 --rate-limit=3 --rate-time=30 --force

Extra bits

The following sections detail two useful options.

Whitelist a network

It is possible that you have a particular network segment that, by its very nature, is secure enough not to require the additional steps of 2FA.  If this is the case for your installation, here is how you can whitelist a network segment:

  1. Create access control list
  2. Update /etc/pam.d/sshd to use the ACL

The following shows the contents of an ACL that would whitelist the network

[root@server ~]# vi /etc/security/2fa-acl.conf
[root@server ~]# cat /etc/security/2fa-acl.conf
# Allows certain network segments to skip 2FA
+ : ALL :
- : ALL : ALL
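Filled in with a hypothetical LAN of, the ACL would look like the fragment below (pam_access also accepts older network/prefix forms; check the access.conf(5) man page shipped with your release):

```
# Allows certain network segments to skip 2FA
+ : ALL :
- : ALL : ALL
```

The first matching rule wins: members of the listed network pass, and the final catch-all line denies (in this context, falls through to 2FA for) everyone else.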

Once you have the ACL built add an exclusionary conditional to the SSH PAM configuration file BEFORE the line that was added to activate the module.  For example:

[root@server ~]# sed -i "2iauth [success=1 default=ignore] accessfile=/etc/security/2fa-acl.conf" /etc/pam.d/sshd

Allow unconfigured users

If you only have a user or two, it is easy enough to manage activating the google-authenticator system for each user by hand, but if you are managing a server with a large number of users, you might want to establish a grace period during which users who have not yet configured their local account can continue to log in.

This can be accomplished by adding the nullok parameter to the end of the PAM configuration file line:


If you have already added the line and you just want to easily add the new parameter, then use the following sed command:

[root@server ~]# sed -i '2s/$/ nullok/' /etc/pam.d/sshd
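The `2s/$/ nullok/` substitution can be rehearsed on a scratch file first; it appends the text to the end of line 2 only, leaving every other line alone:

```shell
# Append " nullok" to line 2 of a stand-in file and inspect the result.
printf '%s\n' '#%PAM-1.0' 'auth required' > /tmp/pam.test
sed -i '2s/$/ nullok/' /tmp/pam.test
sed -n '2p' /tmp/pam.test   # line 2 now carries the nullok parameter
```

With nullok in place, users who have no ~/.google_authenticator file yet fall back to password-only login until they run the setup.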


If you encounter login failures during your testing, a good place to start is the content of the following log files:

  • /var/log/secure
  • /var/log/audit/audit.log

Switch your domain registrar to Hover

There is an adage that you should vote with your wallet when you are unhappy with a particular business or their practices.  Do we do this as much as we should?  Probably not.

Taking this to heart, I have decided to try to apply it to my electronics and gadget purchases.  So the first thing I chose to do was switch my domain registration from NameCheap to Hover.  There were several practical reasons, like support for the TOTP/HOTP system for two-factor authentication instead of relying on SMS messages.

But there are some other reasons, like the support for causes that I feel are worthwhile.  Here are just a few that Hover is a patron of:

So if you own a domain or two, or you work somewhere that does, think about switching your registration to Hover and help support not only your account’s security but also some great organizations.

Gratuitous Recommendation Link:

Full disclosure: I work at and they are the parent company of and, two really large domain registration companies, but I still prefer Hover.

RHEL7 and ncat changes

One of the tools that I use on a regular basis to test network connectivity is the “z” option of netcat.  Apparently, when Red Hat rolled out the latest version of Red Hat Enterprise Linux (RHEL), they decided to move to the nmap-ncat package instead of the nc package.  The command options are very different.

So when testing a single port like I would have under previous releases, I now use the following syntax:

# echo | nc -w1 $host $port >/dev/null 2>&1 ;echo $?

If the result that is returned is zero, then you have successfully connected to the remote host on the desired port. This also applies to CentOS 7, since it is a “clone” rebuilt from the RHEL7 sources.
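If even nmap-ncat is missing, bash can fake a similar probe with its /dev/tcp pseudo-device (a bash-ism, not POSIX sh); the function below is a sketch of that idea:

```shell
# Probe a TCP port and print the exit status: 0 if something accepted the
# connection, non-zero otherwise. timeout(1) keeps a filtered port from hanging.
probe() { timeout 1 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; echo $?; }

probe 1   # nothing listens on port 1, so this prints a non-zero status
```

Note that /dev/tcp redirections only work when the shell actually is bash, which is why the function invokes `bash -c` explicitly.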

Dear Comcast: You Suck

Editorial Note: Apparently Comcast really would prefer that people not use the term data cap when referring to the limitations being placed on their customers’ data usage, and would much rather we use the term data usage plan or data threshold; however, I don’t really care. 🙂

Dear Comcast,

I would like to go on record as saying that you suck.  I recognize that you are a for profit company and that you would like to make a profit on the services that you provide.  I even think that having your company make a profit is a good thing because that enables you to pay your employees so that they can put food on their tables and afford to pay the fees for their children to participate in Little League baseball and other such childhood activities.

Your data usage cap system is bogus.  According to the data available on your own website, you have rolled out data caps in eight (8) different trial markets since 2012:

  • August 1, 2012: Nashville, Tennessee – 300GB cap
  • October 1, 2012: Tucson, Arizona – 3 tiers (300GB, 350GB, 600GB)
  • August 22, 2013: Fresno, California – Economy Plus option added
  • September 1, 2013: Savannah, Georgia; Central Kentucky; Jackson, Mississippi – 300GB
  • October 1, 2013: Mobile, Alabama; Knoxville, Tennessee.
  • November 1, 2013: Huntsville, Alabama; Augusta, Georgia; Tupelo, Mississippi; Charleston, South Carolina; Memphis, Tennessee – 300GB cap
  • December 1, 2013: Atlanta, Georgia; Maine – 300GB cap
  • October 1, 2015: Fort Lauderdale, the Keys and Miami, Florida – 300GB cap plus $30 option for increasing to unlimited

In fact, your page on this even refers to these as “trial start dates,” which to a reasonably minded person would imply that they have an end date as well; however, to the best of my knowledge (as well as per comments made by a customer support representative), there is no plan to end these trials OR any plan to actually collapse them into a single cohesive plan that applies to your entire service.

Now before I get into the real meat of my complaint with your pricing plans, let me go on record as saying that I have no real problem with a metered usage system for data bandwidth.  I pay for a metered system with electricity usage, as do most consumers (unless maybe you can afford to own a skyscraper with your name on it).  If my usage of power goes up, then so does my bill.  If my usage goes down, then so does my bill.  However, my power bill is not likely to increase at the rate my cable bill will for my bandwidth usage.

The problem I have with my data cap (or as you would say, data usage plan) is that I have no choice in plans.  If all you get is one option, then quit calling it a choice.  If you want to call it a choice, then give me one!  I am fine with paying an additional $30 a month for the option to not have a cap.  Will I use less than my current 300GB cap some months, thus giving you extra money in your pocket? Sure I will, but I am OK with that.  I currently pay for 20GB of wireless data on my cell phone plan, and most months I don’t even hit 10GB of usage, but I am happy to pay the extra every month so that in the months when I am travelling and need to use my phone as a hotspot, I won’t suddenly find an additional $80-$100 hit on my bill.  Give me the option to pay for a plan that allows me to meet my usage needs at the high end and budget for it, and I will happily pay.

Oh and the idea that 300GB of data is actually a reasonable place to start your cap is laughable.  With more and more consumers, even in the rural South where I live, moving to services like Netflix and Hulu for media consumption, your insistence that 300GB is a good median limit is just making your service ripe for competition.  Take a look at places where there is actual competition and you will see what I am talking about (of course the fact that Google and AT&T apparently don’t care about consumers living outside of a rural area puts the lie to their claim of offering competition).

On October 1, 2015, you flipped the billing switch that allows customers in three markets in Florida to pay $30 more and have no data cap.  Why not just flip that switch for the whole country?  Better yet, why not just up my bill by $30 and remove the cap completely?  Want to switch to a completely metered plan?  Fine, then do it, but while you are at it, make the price per gigabyte reasonable.  A recent summer 2015 survey from the Georgia PSC showed that I paid roughly 11.7 cents per kWh to my rural electric membership corporation, Satilla REMC, for a 1000 kWh block, and I suspect that they have a higher cost per kWh than you do per gigabyte.  If I break down my cable bill, I pay roughly 25 cents per gigabyte for my data.  When I exceed my data cap, I get charged an overage fee of $10 for every 50GB chunk, which means that whether I use 1GB extra or 50GB extra, I get charged the extra $10.  That breaks down to 20 cents per gigabyte.  If you can charge me $.20 for the overage data, then why not charge me $.20 for the original 300GB?  So instead of my data portion costing me $76, it would cost $60.  And while we all know that I am being overcharged per gigabyte, let’s just be honest and up front and try not to gouge the only customers you have.  For example, a 2011 article in the Globe and Mail talked about this very issue in Canada and determined that Bell was selling data to wholesale customers at $4.25 per 40GB block (that’s Canadian dollars, which breaks down to approximately $0.10 Canadian per gigabyte), yet the same service was costing consumers $1 or more per gigabyte.  I haven’t seen numbers for the costs in the US per gigabyte, but I am willing to bet that it’s not that much more.

So why don’t you do everyone a favor and treat all your customers the same?  Quit dithering around on the usage cap terms and give us, the consumers you claim to care about, actual choice in data plans.  It’s a crazy thing, but when you start treating your customers like people who are worth something, it’s just possible that you might not be vilified in the press every day.

And while we are at it, thanks oh so much for silently increasing my download rate from 50Mbps to 75Mbps.  I am sure that at some point in the future you will just up my rate to make up for the speed increase without actually changing my data cap. So yeah, thanks a lot for that.

And not that it matters, but yeah, that FCC complaint?  That was me.


Andrew Fore

Massive Numbers of Chrome Helper Messages in system logs

Today when attempting to figure out why Google Hangouts would not start on my Mac after the application was re-enabled due to a permissions change, I noticed a large number of messages like the following:

6/10/15 10:20:14.000 AM kernel[0]: Google Chrome He (map: 0xffffff804da160f0) triggered DYLD shared region unnest for map: 0xffffff804da160f0, region 0x7fff99a00000->0x7fff99c00000. While not abnormal for debuggers, this increases system memory footprint until the target exits.

After some research, I found that this is a reported issue in the Chromium bug tracker.  At first I thought that this might be the cause of the problem I was having, but that turned out not to be the case; simply removing the Hangouts app in Chrome and re-adding it fixed my issue.  However, the sheer number of these messages makes the log a bit unwieldy.  It turns out that there is a way to hide them (thanks to the commenter in the Chromium bug thread!):

sudo sysctl -w vm.shared_region_unnest_logging=0

While it doesn’t help at all with Chrome’s memory issues or other UI issues on Mac OS X, it is rather nice to hide all those spurious messages from the system log.
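The sysctl change does not survive a reboot on its own; on the OS X releases of that era, an entry in /etc/sysctl.conf was read at boot (worth confirming on your particular version before relying on it):

```
vm.shared_region_unnest_logging=0
```

Creating the file if it does not exist, with that single line, should make the quieter logging stick across restarts.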

Checking your password expiration date

While logging into one of the Linux jump boxes at work today, it occurred to me that while I had recently been notified about my password expiration by our Active Directory farm, I had no idea when my Linux password would expire or what the password lifetime was.

To find out this information you can easily use the chage command.

Here is what the output looks like:

[user@myserver ~]$ chage -l user
Last password change : Apr 09, 2015
Password expires : Jul 08, 2015
Password inactive : never
Account expires : never
Minimum number of days between password change : 1
Maximum number of days between password change : 90
Number of days of warning before password expires : 7
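With GNU date, the dates in a listing like the one above can be turned into a concrete day count; the values are pinned here so the arithmetic is reproducible:

```shell
# Days between the last password change and the expiry date shown above.
changed="Apr 09, 2015"
expires="Jul 08, 2015"
days=$(( ( $(date -d "$expires" +%s) - $(date -d "$changed" +%s) ) / 86400 ))
echo "$days"   # prints 90, matching the 90-day maximum in the listing
```

In practice you would feed in the strings parsed out of `chage -l`, substituting today's date for the change date to get days remaining.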

It may seem like such a simple thing to do, but knowing when your password expires can be a lifesaver on occasion.