Make Vim more Python friendly…

Here are a few lines for the vimrc file that will make Vim a little more Python friendly by providing automatic indentation.

syntax on
set autoindent
set smartindent
set tabstop=4
set shiftwidth=4

Update: After setting these, I found that pasting text into Vim turned out to be a nightmare. So here is how you get around that.
Type:
:set paste
before pasting, and then type
:set nopaste
when you’re done to restore the vimrc settings.
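If you find yourself toggling paste mode often, Vim also has a built-in option that binds the toggle to a single key. A one-line addition to the vimrc (F2 here is just an example key, pick whatever you like):

```vim
" Press F2 to toggle paste mode on and off
set pastetoggle=<F2>
```

While paste mode is on, Vim shows -- INSERT (paste) -- in the status line so you can tell which state you're in.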

Update: I am looking for ways to improve my vimrc even more, if you have any ideas, please let me know in the comments section. Thanks!

Posted in Technology | Leave a comment

Android vs iOS A Mini Review #2: Music

This post will definitely not be a “which is better” post, because it depends entirely on your music situation.  Both can be perfectly decent depending on how your music library is set up.

The “Cloud”

Cloud computing has dramatically changed the way we think about using consumer products.  The cloud has been around for a long time, but the term didn’t really gain much popularity until the last several years, when companies really started embracing it.

Companies have brought us a new option for storing music by taking advantage of the cloud.  Google, Amazon, and Apple now offer music storage services of their own, and as long as you have an internet connection you should be able to access your music.

Not all cloud music services are created equal, however.  Apple charges 25 dollars a year for its music storage solution, but it offers one feature that Amazon and Google currently do not (as far as I know).  Apple’s Match service allows a user to upload their iTunes library to Apple’s servers.  Apple will scan each song, and if they have that song in their catalog, they will provide you with a 256 kbps version in place of the version that you have.  If they don’t have that song in their system, then they will upload your copy.

Google and Amazon offer 5 GB of free music storage and will upload all of your music, provided it is DRM-free.  I’m assuming music bought from their own stores may be an exception.

I have most of my music on all three services. Yes, I paid Apple to store my music, but I’m also an iPhone user with a MacBook Pro plus a PC, and I thought it would come in handy.  I only used Amazon’s music service so that I could access my music from my Kindle, and it seemed to work fine. I wasn’t able to find a Google-made music app for the iPhone, but a third-party app seemed to work alright as well. When I had my Droid X, there was a really decent Google Music app made by Google that I used frequently.  A plus for Google Music is that it has a web interface, so you can access your music from any computer with an internet browser.  There were times when the music would cut out on the website and just stop playing, but I wasn’t sure if that was Google Music or something on my system.

Apple’s iTunes Match has no web interface, and requires that you have iTunes or an iOS device to access your music in the cloud.  It was really nice to use on my MacBook, since I had installed a 128 GB SSD and really didn’t want to use up all of the space with my music library. I could just stream the music from the internet via iTunes.

iTunes Library

If you don’t have an iTunes library, have no desire to ever have one, will never have an iPhone, iPod, or iPad… skip this section, it probably will never apply to you.

Now if you do have an iTunes library, syncing with the latest iOS is really nice.  You can use iTunes Match, never sync, and just choose what songs you want to play and stream them, but personally I like having my library on the local device. (My iTunes library really isn’t that big, so I can afford to use my iPhone’s storage.) With iOS, iTunes can sync either over Wi-Fi or via USB cable. There are some settings in iTunes, but it’s simple.

Syncing with Android is possible, but when I first attempted it, it didn’t work very well.  That could have been because of my Droid X; I’m really not sure. Android users need to install a program called DoubleTwist on the computer with their iTunes library, and install the matching app on their phone. For around 5 dollars you can also sync wirelessly.  This application will sync your DRM-free iTunes songs with your Android device.

Android Storage

If you don’t have iTunes but still want to store music locally, you can.  It’s as simple as copying and pasting to your phone. I won’t go into too much detail, as there are probably 1000 ways to do it.

Conclusion

If you have an iTunes library, then iOS… if you don’t have an iTunes library then Android, unless you take the cloud storage option. It all really depends on your situation, and neither is really better than the other in this case. No matter what, if you want music stored locally on your iOS device, you must have iTunes, while on Android there really is no application requirement.

Posted in Uncategorized | 1 Comment

The Problems With Our Weather Alert System

A lot of us take the loud emergency sirens for granted when they go off to inform us that bad weather is near, but if you live in a small town like my parents do, you may not have that luxury.  Even in 2012 there are still small towns in the United States that don’t inform their citizens when danger is near.  A lot of these small towns have a siren that can be heard throughout town, but they feel it is better used to announce noon than impending danger.

So what is the solution for these folks?  Well, a weather radio of course! Well, maybe not…  I’ve always been a fan of these little devices that go off whenever there is a watch, warning, or advisory, but a lot of folks find them annoying, especially when they go off at 3 o’clock in the morning to alert them of a Severe Thunderstorm Watch.  In all reality, who is really going to do anything about a tornado or thunderstorm watch? Personally, I’m going to ignore it. These common annoyances have led some weather radio owners to simply unplug them so that they can get a decent night’s sleep. See the problem?  Although weather radios can be useful in a real emergency, there is a huge probability that they simply desensitize their owners to the point that they don’t even get plugged in anymore. It would be like working in a building that had the fire alarm going off every 20 minutes.  You might jump up for the first couple of alarms, but how many alarms have to go off before you say “screw it” and keep working?

Another issue with the weather radio paradigm is that you must subscribe your radio to a county (or multiple counties).  In places like the state of Iowa, weather warnings are not considered county-wide.  They can be activated for a portion of a county, yet anyone subscribed to that county will receive the alert regardless of where they are located. Once again, this causes a surplus of alerts.

So do you see the problems? The first is that there may be no alerting going on at all, and the second is an alarm that goes off so often that folks don’t even want to pay attention anymore.

So what is the solution?  Honestly, I have no idea.

In this day and age, it seems like we should be able to subscribe to alerts for our own cities rather than whole counties, but that level of granularity apparently poses a major technical challenge.

A small project that I have been working on is getting alerts delivered via social media such as Twitter and Facebook. I already have a script that runs, pulls data from the Weather Underground API, and then pushes it up to Twitter. The only problem is that I’m not sure whether, if there is a tornado/thunderstorm warning for part of my county, Weather Underground will post it for all cities within the county.  That would leave us stuck with the same over-alerting issue.
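For the curious, the formatting half of a script like that can be sketched as follows. This is only a sketch: the response structure (a dict with an "alerts" list holding "description" and "expires" fields) is an assumption modeled on what my script consumes, not an official API reference, and the actual fetching and posting to Twitter are left out.

```python
# Sketch: turn a parsed alerts response into tweet-sized strings.
# The field names here are assumptions -- check the API docs for the
# real structure before relying on them.

def format_alerts(response, max_len=140):
    """Build one tweet-sized string per active alert."""
    tweets = []
    for alert in response.get("alerts", []):
        text = "%s until %s" % (alert["description"], alert["expires"])
        tweets.append(text[:max_len])  # stay under the tweet limit
    return tweets

sample = {"alerts": [{"description": "Severe Thunderstorm Warning",
                      "expires": "7:00 PM CDT"}]}
# format_alerts(sample) yields one string per alert
```

The posting step would then hand each string to whatever Twitter client you use.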

Does anyone else have thoughts on this?

Posted in Personal, Technology | Leave a comment

Member Statements: An Annoyance of Converting to a Different Core Account Processor

A few years ago I was hired at a local credit union right before they were getting ready to convert to a different Core Account Processor.  My primary job at the time was to make sure all of the desktops were up to date so that the new software would work… My role changed a little after the conversion.

Getting the Member Statements from the Old Processor

We waited months before we received the statements from the old processor.  The format in which we received them was absolutely grotesque.  There was one file per year, meaning EVERY member statement for EVERY member for a given year was in a single text file.  These files were about 512 MB each, which meant that Notepad couldn’t open them, so none of our employees could open them. In the beginning I had to install Notepad++ and order more RAM for my computer just so I could retrieve a statement for someone. The old processor recommended that we pay several hundred dollars for a piece of software that could supposedly open these files. That was not going to be an option for us. So it was decided I would use my computer science and Python skills to try to parse these statements.

Parsing Those Member Statements

At first glance, using Notepad++, these files had nothing that made it obvious where one statement ended.  Also, important info like the member name and account number was not always in the same place on each statement, so capturing that information was incredibly difficult. Since Notepad++ was also chugging trying to open these files, I installed gVim, a Windows build of Vim.  gVim was able to show a lot more of the data in these files, and I found something placed between every statement: at the end of each statement was a hidden escape character. I had something to parse on now!  So I wrote my Python script to dump each statement to a file named by account number. This process still wasn’t easy; getting those account numbers was extremely difficult, so I had to write a lot of code to constantly correct for the location of the account number in each statement. I named each statement in the format ####MMYYYY.txt, where # is the account number, M the month, and Y the year. In the end, I was able to get over 1 million statements parsed.
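Stripped to its core, the parsing logic looked something like this. To be clear, this is a simplified sketch: the escape byte and the account-number pattern here are stand-ins (the real files needed far more correction logic, as mentioned above).

```python
import re

# '\x1b' (escape) is a stand-in for the hidden control character that
# separated statements in the real files; the account-number pattern
# is likewise a simplified guess at the real layout.
ACCOUNT_RE = re.compile(r"Account\s*#?\s*(\d+)")

def split_statements(raw):
    """Split one big yearly file into individual statements."""
    return [s for s in raw.split("\x1b") if s.strip()]

def account_number(statement):
    """Pull the account number out of one statement, or None."""
    match = ACCOUNT_RE.search(statement)
    return match.group(1) if match else None

def statement_filename(statement, month, year):
    """Name each file ####MMYYYY.txt, as described above."""
    return "%s%02d%d.txt" % (account_number(statement), month, year)
```

The real script then wrote each chunk out under that filename, one file per statement.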

You Have All These Files, Now What?

Once the files were parsed, I tried to make them available to the employees so I wouldn’t have to constantly take in requests for statements. So I placed them on our server, which was a big mistake.  Windows had a LOT of trouble trying to display that many files in Explorer.  I tried putting the files in folders by year and month, but it didn’t help.  The server would get laggy, and people’s desktops would get laggy trying to load all of those files. So my solution?  Build a web app.

The Web Application: Version 1

I felt that IIS and Windows weren’t going to be a good platform to start with, mainly because getting Windows to do CGI scripts had, in the past, been kind of a pain. So I found an old desktop and installed Ubuntu Server, Apache2, and Python.  I knew that searching the file system on every request was going to be incredibly I/O heavy, especially for a desktop machine that was only a P4 with 1 GB of RAM, so I knew MySQL would have to come into play.  I wrote another Python script to go through all of the statements; it recorded each file’s account number, month, year, and file location, then uploaded that information to the database.
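The indexing script boiled down to walking the statement directory and decoding each filename back into its parts. A sketch of that, with the actual MySQL insert left as a comment since the table layout was specific to our setup:

```python
import os
import re

# ####MMYYYY.txt -- account number, two-digit month, four-digit year
FILENAME_RE = re.compile(r"^(\d+)(\d{2})(\d{4})\.txt$")

def parse_filename(name):
    """Return (account, month, year) from a statement filename, or None."""
    match = FILENAME_RE.match(name)
    if not match:
        return None
    account, month, year = match.groups()
    return account, int(month), int(year)

def index_statements(root):
    """Yield one (account, month, year, path) record per statement file."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            parsed = parse_filename(name)
            if parsed:
                account, month, year = parsed
                yield account, month, year, os.path.join(dirpath, name)
                # The real script INSERTed each record into MySQL here.
```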

To allow employees access to this information, I wrote a Python CGI script that let them type in a member’s account number and select a year.  The script would then display all of the matching files in a table, with a link to each file.
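The CGI side was not much more than a form handler plus a table builder. A minimal sketch of the table-building half (the column names and row shape are illustrative, not the real schema):

```python
def results_table(rows):
    """Render statement search results as an HTML table.

    Each row is (month, year, url); the real script pulled these
    from the MySQL index described above.
    """
    cells = ["<tr><th>Month</th><th>Year</th><th>Statement</th></tr>"]
    for month, year, url in rows:
        cells.append(
            '<tr><td>%02d</td><td>%d</td><td><a href="%s">view</a></td></tr>'
            % (month, year, url))
    return "<table>%s</table>" % "".join(cells)
```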

For security reasons, I set up Apache to authenticate against our Active Directory server, so if someone from outside somehow managed to connect to our network, they wouldn’t be able to access those files without logging in.

Also, this server WAS NOT accessible from outside the credit union. You SHOULDN’T do that unless you know what you’re doing.

This proved to work fairly well, but I never really liked it. It felt clunky and I didn’t feel it was secure enough, so a year and a half later I upgraded it.

The Web Application: Version 2

Since the original version of the web application, I had learned more about MySQL and PHP in my computer science classes, so I decided to rewrite the entire application in PHP. This let me add more features, such as the ability for employees to upload additional files for members (signature cards, etc.). Since I already had the database set up, I just had to rewrite the code and make the application look better. Switching to PHP also made the application run a lot faster than its Python counterpart.

For authentication, I was able to move away from Apache’s authentication configuration and build the authentication right into the application, which still authenticates against AD.  Another security feature I was able to add was logging: I now keep logs of who accesses which statements, who fails at logging in (using a wrong password), and who logs in successfully. Since I had changed how the application logs users in, I had to lock down the directories that store the statements so they wouldn’t be accessible from the web, which meant changing how the application serves the statements. The application now reads the requested statement and prints it to the screen, rather than letting the user’s browser open a text file directly.

My solution is good for in-house use, and I would never feel comfortable placing it on the internet.  This is an intranet-only solution, and in my opinion the member statements are secure enough for that purpose.

If you would like more information, or even my assistance in setting something up like this, feel free to email me at support@rychannel.com.

Posted in Uncategorized | Leave a comment

Enterasys, Python, Graphs, and Nagios?

Probably one of the more difficult things to do when you have an extremely large network is mapping out the structure of that network, especially if you want to monitor the devices that make up the network.  Why would you need to know the structure of a network you’re monitoring? Well, if you’re monitoring 400 switches and 1 of those switches goes down, but 125 switches uplink through that 1 down switch, do you really want 126 notifications that you have down equipment?  I think not.  So what are your options? You could be the ultimate IT ninja and know exactly how your giant network is set up, you could manually go out and map which switch uplinks to which, or you could turn to a little feature that has been part of Enterasys’ switches for a long time, but more on that later.

Some Background Stuff About Icinga and Nagios

First of all, Icinga is a fork of the Nagios project, and the process of configuring network devices is nearly if not totally identical between the two. In my setup, each switch has its own directory containing a config file for the host and a file for each service associated with that host, but you can organize the configurations however you would like.

Back to that Enterasys Thing

So, Enterasys and old Cabletron equipment have a feature called the Cabletron Discovery Protocol (CDP), not to be confused with the Cisco Discovery Protocol (which may or may not work the same way). Enterasys hasn’t dropped CDP even from their latest K-series switches. I can’t speak too much on how CDP works internally, so I’m just going to tell you how we’re using it to build our network configuration in Icinga.

Enterasys Netsight Console

From the Netsight Console we are able to load a flexview that displays CDP Neighbor Info.  Neighbor info, you ask? When you select a set of switches, this flexview displays all of their neighboring switches in a table.  A lot of the Cabletron DP data can also be pulled via SNMP, but I haven’t had time to write a script for that just yet. That will be a future project.

Once you have the neighbor switch data you can export the flexview output to a CSV file that will be used by the next phase of this project.

You said something about Python?

Python has been my favorite programming language since I was forced to learn it during my first year of college (Thank You University of Northern Iowa). This first script that I wrote in Python creates a “graph” data structure. If you’re not sure what I’m talking about, click here. The original purpose of this script actually had nothing to do with Icinga, instead it was created to find the farthest end point switches on the network.

Back to the script: the graphs are actually implemented using a Python dictionary, where each switch is a key and each key’s value is a Python list of that switch’s neighbors.

For example:

graph = {'a': ['b', 'd', 'f'],
         'b': ['a'],
         'd': ['b', 'a'],
         'f': ['a']}
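Building that dictionary from the exported neighbor data can be sketched like this. I’m assuming the CSV has already been trimmed down to (switch, neighbor) pairs; a real flexview export has more columns, so adjust the indexing to match yours.

```python
def build_graph(rows):
    """Build an adjacency dict from (switch, neighbor) pairs.

    `rows` can be anything iterable -- e.g. csv.reader() over the
    flexview export, reduced to the switch and neighbor columns.
    """
    graph = {}
    for switch, neighbor in rows:
        # Record the edge in both directions, since uplinks are
        # bidirectional regardless of which side reported the neighbor.
        for a, b in ((switch, neighbor), (neighbor, switch)):
            graph.setdefault(a, [])
            if b not in graph[a]:
                graph[a].append(b)
    return graph
```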

Once the graph is built, a shortest-path algorithm is run for each switch. In our case we find the shortest path from each switch to our core switch. In my opinion, you should actually find the shortest path to the switch your Icinga/Nagios server is connected to, but in our case the Icinga VM tends to move between several switches. Anyway, the shortest-path algorithm creates a list of lists, where each element is a path, and each path is a list of switches leading to the core. Once that shortest-path list is created, I use a nifty Python feature called pickle that lets me dump the data structure to disk for use by other scripts.
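Since every link counts as one hop, a plain breadth-first search is enough to find those paths. A sketch, using the dictionary-of-lists graph from the example above (I return a dict of paths here rather than a raw list of lists, which is a small simplification of my script):

```python
from collections import deque

def shortest_paths(graph, core):
    """Return {switch: path}, each path running from the switch to core.

    Plain breadth-first search: every hop costs the same, so the first
    time we reach a switch we already have its shortest route.
    """
    paths = {core: [core]}
    queue = deque([core])
    while queue:
        current = queue.popleft()
        for neighbor in graph.get(current, []):
            if neighbor not in paths:
                # Path from neighbor to core = neighbor + current's path.
                paths[neighbor] = [neighbor] + paths[current]
                queue.append(neighbor)
    return paths
```

Note that, as described above, each path ends at the core switch.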

Quick thing about Pickling

When pickling a data structure such as a list, Python serializes the structure and writes it out to a binary file.  This is in NO WAY SECURE: anyone who has write access to that file could potentially inject code into it. The nice thing, however, is that when I write another script that needs the same data structure, I just unpickle the file and have instant access to it without rebuilding the entire list of lists.
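The pickling step itself is only a few lines. A sketch, with the same security caveat baked into a comment:

```python
import pickle

def save_paths(paths, filename):
    """Serialize the shortest-path structure to a binary file."""
    with open(filename, "wb") as handle:
        pickle.dump(paths, handle)

def load_paths(filename):
    """Read the structure back in another script, no rebuilding needed.

    Never unpickle a file that others can write to: pickle data can
    execute arbitrary code when loaded.
    """
    with open(filename, "rb") as handle:
        return pickle.load(handle)
```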

Building the Nagios config

This part is pretty straightforward.  Pretty much, you write a script that outputs text files of your configuration, but you want your network layout to be part of it.  Icinga and Nagios have a nice little setting you can set in each device config called parents, which identifies which switch this one uplinks through. The best way to fill it in is to loop through the list of shortest paths you created in the previous phase. The important thing to remember is that in each path, only 2 switches matter.  Also, recall that each shortest path has the core switch (or whichever switch you were pathing to) at the end.  Back to the 2 important switches: the first switch in each path is the switch you’ll write the configuration for, while the second switch in each path is the parent of the first.
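Putting it together, the generator walks each path and emits a host definition whose parents line is simply the next switch on the way to the core. A sketch of that loop; the host template is trimmed down to the fields that matter here, and a real config would add the usual address, use, and check settings:

```python
def host_config(paths):
    """Emit one Icinga/Nagios host block per switch.

    `paths` maps each switch to its path toward the core, e.g.
    {'closet-sw': ['closet-sw', 'dist-sw', 'core']}. Only the first
    two entries matter: the switch itself and its parent.
    """
    blocks = []
    for switch, path in sorted(paths.items()):
        lines = ["define host {", "    host_name %s" % switch]
        if len(path) > 1:
            lines.append("    parents %s" % path[1])  # next hop toward core
        lines.append("}")
        blocks.append("\n".join(lines))
    return "\n\n".join(blocks)
```

The core switch itself (a path of length 1) simply gets no parents line.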

So once you have the configurations built, Icinga will know how your network infrastructure is mapped. If a switch with 20 switches below it goes down, Icinga will mark those 20 switches as unreachable in the web interface, report that the 1 switch went critical, and only notify you that the one switch went down.

 

If you have any questions or comments, feel free to leave them in the comments below. If you would like to request some code, please email support@rychannel.com.

Posted in Technology | Tagged , , , , , , | Leave a comment