RHEL7 and ncat changes

One of the tools that I use on a regular basis to test network connectivity is the “-z” option of netcat.  Apparently when RedHat rolled out the latest version of RedHat Enterprise Linux (RHEL 7), they decided to move to the nmap-ncat package instead of the nc package.  The command options are very different.

So when attempting to test a single port like I would have under previous releases, I now use the following syntax:

# echo | nc -w1 $host $port >/dev/null 2>&1; echo $?

If the result returned is zero, then you have successfully connected to the remote host on the desired port. This also applies to CentOS 7, since it is a “clone” rebuilt from the RHEL 7 sources.
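
To check several ports in one go, the same one-liner drops neatly into a loop. Here is a quick sketch; the host name and port list are just placeholders:

#!/bin/sh
# placeholder host and ports; adjust to suit your environment
host=somehost.example.com
for port in 22 80 443; do
    if echo | nc -w1 $host $port >/dev/null 2>&1; then
        echo "$host:$port open"
    else
        echo "$host:$port closed or filtered"
    fi
done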

Dear Comcast: You Suck

Editorial Note: Apparently Comcast would really prefer that people not use the term data cap when referring to the limitations being placed on their customers’ data usage, and would much prefer that we use the term data usage plan or data threshold; however, I don’t really care. :-)

Dear Comcast,

I would like to go on record as saying that you suck.  I recognize that you are a for-profit company and that you would like to make a profit on the services that you provide.  I even think that having your company make a profit is a good thing, because that enables you to pay your employees so that they can put food on their tables and afford to pay the fees for their children to participate in Little League baseball and other such childhood activities.

Your data usage cap system is bogus.  According to the data available on your own website, you have rolled out data caps to trial markets in eight (8) waves since 2012:

  • August 1, 2012: Nashville, Tennessee – 300GB cap
  • October 1, 2012: Tucson, Arizona – 3 tiers (300GB, 350GB, 600GB)
  • August 22, 2013: Fresno, California – Economy Plus option added
  • September 1, 2013: Savannah, Georgia; Central Kentucky; Jackson, Mississippi – 300GB
  • October 1, 2013: Mobile, Alabama; Knoxville, Tennessee
  • November 1, 2013: Huntsville, Alabama; Augusta, Georgia; Tupelo, Mississippi; Charleston, South Carolina; Memphis, Tennessee – 300GB cap
  • December 1, 2013: Atlanta, Georgia; Maine – 300GB cap
  • October 1, 2015: Fort Lauderdale, the Keys and Miami, Florida – 300GB cap plus a $30 option for increasing to unlimited

In fact, your page on this even refers to these as “trial start dates,” which to a reasonably minded person would imply that they have an end date as well.  However, to the best of my knowledge (as well as the comments made by a customer support representative), there is no plan to end these trials OR any plan to actually collapse them into a single cohesive plan that applies to your entire service.

Now before I get into the real meat of my complaint with your pricing plans, let me go on record as saying that I have no real problem with a metered usage system for data bandwidth.  I pay for a metered system with electricity usage, as do most consumers (unless maybe you can afford to own a skyscraper with your name on it).  If my usage of power goes up, then so does my bill.  If my usage goes down, then so does my bill.  However, my power bill is not likely to increase at the rate my cable bill will for my bandwidth usage.

The problem I have with my data cap (or, as you would say, data usage plan) is that I have no choice in plans.  If all you get is one option, then quit calling it a choice.  If you want to call it a choice, then give me one!  I am fine with paying an additional $30 a month for the option to not have a cap.  Will I use less than my current 300GB cap some months, thus putting extra money in your pocket? Sure I will, but I am OK with that.  I currently pay for 20GB of wireless data on my cell phone plan, and most months I don’t even hit 10GB of usage, but I am happy to pay for the extra every month so that in the months when I am travelling and need to use my phone as a hotspot I won’t suddenly find an additional $80-$100 hit on my bill.  Give me the option to pay for a plan that will allow me to meet my usage needs at the high end and budget for that, and I will happily pay it.

Oh, and the idea that 300GB of data is actually a reasonable place to start your cap is laughable.  With more and more consumers, even in the rural South where I live, moving to services like Netflix and Hulu for media consumption, your insistence that 300GB is a good median limit is just making your service ripe for competition.  Take a look at places where there is actual competition and you will see what I am talking about (of course, the fact that Google and AT&T apparently don’t care about consumers living outside of a metro area puts the lie to their claim of offering competition).

On October 1, 2015, you flipped the billing switch that allows customers in three markets in Florida to pay $30 more and have no data cap.  Why not just flip that switch for the whole country?  Better yet, why not just up my bill by $30 and remove the cap completely?  Want to just switch to a completely metered plan?  Fine, then do it, but while you are at it make the price per gigabyte reasonable.  A recent summer 2015 survey from the Georgia PSC showed that I paid roughly 11.7 cents per kWh to my rural electric membership corporation, Satilla REMC, for a 1000 kWh block, and I think that they probably have a higher cost per kWh than you do per gigabyte.  If I break down my cable bill, I pay roughly 25 cents per gigabyte for my data.  When I exceed my data cap, I get charged an overage fee of $10 for every 50GB chunk, which means that whether I use 1GB extra or 50GB extra, I get charged the same $10.  That breaks down to 20 cents per gigabyte.  If you can charge me $0.20 for the overage data, then why not charge me $0.20 for the original 300GB?  Then instead of my data portion costing me $76, it would cost $60.  And while we all know that I am being overcharged per gigabyte, let’s just be honest and up front and try not to gouge the only customers you have.  For example, a 2011 article in the Globe and Mail looked at this very issue in Canada and determined that Bell was selling data to wholesale customers at $4.25 per 40GB block (that’s Canadian dollars, and it breaks down to approximately $0.10 Canadian per gigabyte), yet the same service was costing consumers $1 or more per gigabyte.  I haven’t seen numbers for the costs in the US per gigabyte, but I am willing to bet that it’s not that much more.
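
For anyone who wants to check my math (using the $76 data portion and the 300GB cap above), the per-gigabyte arithmetic runs through bc easily enough:

$ echo "scale=2; 76/300" | bc    # what I pay now, per gigabyte
.25
$ echo "scale=2; 10/50" | bc     # the overage rate, per gigabyte
.20
$ echo "0.20*300" | bc           # 300GB priced at the overage rate
60.00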

So why don’t you do everyone a favor and treat all your customers the same?  Quit dithering around on the usage cap terms and give us, the consumers you claim to care about, an actual choice in data plans.  It’s a crazy thing, but when you start treating your customers like people who are worth something, it’s just possible that you might not be vilified in the press every day.

And while we are at it, thanks oh so much for silently increasing my download rate from 50Mbps to 75Mbps.  I am sure that at some point in the future you will just up my rate to make up for the speed increase without actually changing my data cap. So yeah, thanks a lot for that.

And not that it matters, but yeah, that FCC complaint?  That was me.


Andrew Fore

Massive Numbers of Chrome Helper Messages in system logs

Today, while attempting to figure out why Google Hangouts would not start on my Mac after the application was re-enabled following a permissions change, I noticed a large number of messages like the following:

6/10/15 10:20:14.000 AM kernel[0]: Google Chrome He (map: 0xffffff804da160f0) triggered DYLD shared region unnest for map: 0xffffff804da160f0, region 0x7fff99a00000->0x7fff99c00000. While not abnormal for debuggers, this increases system memory footprint until the target exits.

After some research I found that this is a reported issue in the bug tracker for Chromium.  At first I thought that maybe this was the cause of the problem I was having, but that turned out not to be the case; simply removing the Hangouts app in Chrome and re-adding it fixed my issue.  However, the sheer number of these errors makes the log a bit unwieldy.  It turns out that there is a way to hide all these messages (thanks to the commenter in the Chromium bug thread!):

sudo sysctl -w vm.shared_region_unnest_logging=0
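
One caveat: sysctl -w only lasts until the next reboot. If you want the setting to survive a restart, you can append it to /etc/sysctl.conf, assuming your version of OS X still reads that file at boot:

# persist the setting across reboots (assumes OS X honors /etc/sysctl.conf)
echo 'vm.shared_region_unnest_logging=0' | sudo tee -a /etc/sysctl.conf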

While it doesn’t help at all with Chrome’s memory issues or other UI issues on Mac OS X, it is rather nice to hide all those spurious messages from the system log.

Checking your password expiration date

While logging into one of the Linux jump boxes at work today, it occurred to me that although I had recently been notified about my password expiration by our Active Directory farm, I had no idea when my Linux password would expire or what its lifetime was.

You can easily find this information with the chage command.

Here is what the output looks like:

[user@myserver ~]$ chage -l user
Last password change                               : Apr 09, 2015
Password expires                                   : Jul 08, 2015
Password inactive                                  : never
Account expires                                    : never
Minimum number of days between password change     : 1
Maximum number of days between password change     : 90
Number of days of warning before password expires  : 7
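
If you administer the box, chage can also set these values. Here is a quick sketch with placeholder values:

# set the maximum password age to 90 days and the warning period to 14 days
sudo chage -M 90 -W 14 user

# set an explicit account expiration date
sudo chage -E 2015-12-31 user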

It may seem like such a simple thing to do, but knowing when your password expires can be a lifesaver on occasion.

Windows Tip of the Week: Find your account password expiration date in an AD environment

In many cases your enterprise Active Directory will not involve too many domains; in fact, it is quite common for an Active Directory implementation to include only one domain.  In some cases, however, when you have the unfortunate situation of having a username in multiple domains with differing policies on password expiration, it is useful to know when your password, or that of another user, will expire.  Here is an easy way to accomplish this from the command line.

For the current active user

net user %USERNAME% /domain

For a different user

net user _username_here_ /domain

Here is an example of the output:

User name                    afore
Full Name                    Andrew Fore
User's comment
Country code                 000 (System Default)
Account active               Yes
Account expires              Never

Password last set            1/29/2015 4:38:37 PM
Password expires             4/29/2015 4:38:37 PM
Password changeable          1/29/2015 4:38:37 PM
Password required            Yes
User may change password     Yes

Workstations allowed         All
Logon script
User profile
Home directory
Last logon                   3/18/2015 3:27:55 PM

Logon hours allowed          All

Local Group Memberships
Global Group memberships     *VMWare Admins        *Domain Users

As you can see, there is a lot of useful information about the user account here, but of particular interest in my situation was the value of Password expires, since I was trying to make sure I reset my password before the policy deadline so that I would not find myself locked out over the weekend I was on call, when the Helpdesk would be closed.
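
If all you care about is the expiration date, you can filter the output down to just that line; a quick sketch using findstr:

net user %USERNAME% /domain | findstr /C:"Password expires"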

Solaris Tip of the Week: a better du experience

In my day job as a Systems Engineer I frequently find myself switching between different UNIX and Linux distributions.  While many of the commands exist on both sides of the aisle, I often find vast differences in the command line parameters accepted by a given command in, for example, Linux vs Solaris.

Recently I came upon this again when I needed to easily ferret out the majority consumer of drive space on a Solaris 10 system.  While we did have the xpg4 specification support available, the du command was still missing my favorite option, “--max-depth”.

In Linux I use this to limit the output to only the current directory level so that I don’t have to face the possibility of wading through a tremendously large listing of sub-directories to find the largest directory in the level I am in.  Unfortunately, in Solaris, even with xpg4, the du command doesn’t have this option, so my solution was to pipe the results through egrep and use that to filter out the sub-directories.

Here is some example output from a RedHat Linux 5.11 server:

[root@atl4cmweb01 var]# du -h
8.0K    ./games
8.0K    ./run/saslauthd
8.0K    ./run/lvm
8.0K    ./run/setrans
8.0K    ./run/ppp
8.0K    ./run/snmpd
4.0K    ./run/mysqld
8.0K    ./run/pm
8.0K    ./run/dbus
8.0K    ./run/nscd
8.0K    ./run/console
8.0K    ./run/sudo
8.0K    ./run/netreport
176K    ./run
8.0K    ./yp/binding
24K     ./yp
8.0K    ./lib/games
8.0K    ./lib/mysql
4.0K    ./lib/nfs/statd/sm.bak
8.0K    ./lib/nfs/statd/sm
24K     ./lib/nfs/statd
8.0K    ./lib/nfs/v4recovery
0       ./lib/nfs/rpc_pipefs/statd
0       ./lib/nfs/rpc_pipefs/portmap
0       ./lib/nfs/rpc_pipefs/nfs/clntf
0       ./lib/nfs/rpc_pipefs/nfs/clnt5
0       ./lib/nfs/rpc_pipefs/nfs/clnt0
0       ./lib/nfs/rpc_pipefs/nfs
0       ./lib/nfs/rpc_pipefs/mount
0       ./lib/nfs/rpc_pipefs/lockd
0       ./lib/nfs/rpc_pipefs
40K     ./lib/nfs
8.0K    ./lib/dhclient
8.0K    ./lib/iscsi/isns

Here is the same example output from the RedHat server using the --max-depth option:

[root@atl4cmweb01 var]# du -h --max-depth=1
8.0K    ./games
176K    ./run
24K     ./yp
22M     ./lib
32K     ./empty
1.5G    ./log
12K     ./account
236K    ./opt
24K     ./db
8.0K    ./nis
2.9M    ./tmp
8.0K    ./tmp-webmanagement
40K     ./lock
8.0K    ./preserve
8.0K    ./racoon
16K     ./lost+found
1.4M    ./spool
8.0K    ./net-snmp
83M     ./cache
8.0K    ./local
1.6G    .

Here is the command run without my egrep mod on Solaris 10:

[root@atl4sfsbatchb log]# /usr/xpg4/bin/du -h
  25K ./webconsole/console
  26K ./webconsole
   1K ./pool
   1K ./swupas
   2K ./ilomconfig
   1K ./current/ras1_sfsuperbatchb
   1K ./current/od1_atl4sfsuperbatchb
 4.3G ./current/ras1_atl4sfsbatchb
 2.1G ./current/od1_atl4sfsbatchb
 560K ./current/avs
   2K ./current/ebaps/output
 9.3M ./current/ebaps
 4.0M ./current/psh
 3.1M ./current/autoresponder
   5K ./current/fdms_download
  29K ./current/fdms_server
 109K ./current/fmt
   5K ./current/paris/output
 653K ./current/paris
   1K ./current/od1_sfsuperbatchb
  28K ./current/ccTemplateLoader
 633K ./current/ccTemplateLoaderLegacy
  15M ./current/whinvoices
   1K ./current/appmonitor.prod.netsol.com
 132M ./current/chase
 6.6G ./current
 160K ./archive/ccTemplateLoader
   1K ./archive/od1_atl4sfsuperbatchb
 4.9M ./archive/avs
   1K ./archive/ebaps/output
  26M ./archive/ebaps
 881M ./archive/psh
1014M ./archive/autoresponder
   1K ./archive/fdms_download
 6.8M ./archive/fdms_server
  21M ./archive/paris
   1K ./archive/ccTemplateLoaderLegacy
 4.1G ./archive/ras1_atl4sfsbatchb
 3.1G ./archive/od1_atl4sfsbatchb
 5.9G ./archive/chase
 102M ./archive/whinvoices
  15G ./archive
  22G .

And here is the improved command output using my egrep mod on the same Solaris server:

[root@atl4sfsbatchb log]# /usr/xpg4/bin/du -hx | egrep -v '.*/.*/.*'
  26K ./webconsole
   1K ./pool
   1K ./swupas
   2K ./ilomconfig
 6.6G ./current
  15G ./archive
  22G .
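
If you need other depths, one alternative is to count the slashes with nawk instead of hard-coding an egrep pattern. This is just a sketch, assuming nawk is available (it normally is on Solaris 10); set d to the depth you want:

# keep only paths at most d levels deep (d=1 mimics --max-depth=1)
/usr/xpg4/bin/du -h | nawk -v d=1 'gsub("/", "/") <= d'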

Desktop Google Chrome Reader Mode

If you are a Safari user then you are likely used to “reader mode,” which disables all the extra graphical stuff and focuses the view on the content of the article.  Thanks to a tip from Google Plus user Francois Beaufort, here’s how to enable it on the desktop (in Windows at the very least; I haven’t tried it in any other OS).

If you’re on desktop, playing with it is as easy as running Chrome with the --enable-dom-distiller switch. Once it’s done, you’ll notice a new “Distill page” menu item.
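
On Windows that looks something like the following; the path below is just the usual default install location and may differ on your system:

"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --enable-dom-distiller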

Hopefully this will make it to mainstream with a nice icon.