ChiPy Python Mentorship Dinner March 2015

The Chicago Python Users Group mentorship program for 2015 is officially live! It is a three-month-long program where we pair up a new Pythonista with an experienced one to help them improve as developers. Encouraged by the success of last year, we decided to do it on a grander scale this time. Last night ChiPy and Computer Futures hosted a dinner for the mentors at Giordano's Pizzeria to celebrate the kickoff – deep dish, Chicago style!


The Matchmaking:

Thanks to the brilliant work by the mentors and mentees from 2014, we got a massive response as soon as we opened the registration process this year. While the number of mentee applications grew rapidly, we were unable to get enough mentors and had to limit the mentee applications to 30. Of them, 8 were Python beginners, 5 were interested in web development, 13 in Data Science, and the rest in Advanced Python. After some interwebs lobbying and some arm-twisting mafia tactics, we finally managed to get 19 mentees hooked up with their mentors.

Based on my previous experience pairing mentors and mentees, the relationship works out only if there is a common theme of interest between the two. To make the matching process easier, I focused on getting a full-text description of each applicant's background and end goals as well as their LinkedIn data. From what I heard last night from the mentors, the matches have clicked!
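(For the curious: the matching itself boils down to scoring interest overlap between the two sides. Here is a minimal, hypothetical sketch of that idea in Python; the field names and the greedy strategy are illustrative, not the actual process I used.)

def interest_overlap(mentor, mentee):
    """Score a pair by how many declared interests they share."""
    return len(set(mentor["interests"]) & set(mentee["interests"]))

def greedy_match(mentors, mentees):
    """Assign each mentee the still-available mentor with the best overlap."""
    available = list(mentors)
    pairs = []
    for mentee in mentees:
        if not available:
            break
        best = max(available, key=lambda m: interest_overlap(m, mentee))
        if interest_overlap(best, mentee) > 0:
            available.remove(best)
            pairs.append((best["name"], mentee["name"]))
    return pairs

mentors = [{"name": "Alice", "interests": ["data science", "web"]}]
mentees = [{"name": "Bob", "interests": ["data science"]}]
print(greedy_match(mentors, mentees))  # [('Alice', 'Bob')]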


The Mentors’ Dinner:
As ChiPy organizers, we are incredibly grateful to these 19 mentors, who are devoting their time to help the Python community in Chicago. Last night's dinner was a humble note of thanks to them. Set in the relaxed atmosphere of the pizzeria, stuffed with pizza and beer, it gave us an opportunity to talk about how we can make the process more effective for both mentors and mentees.

Trading of ideas and skills:
The one-to-one relationship between mentor and mentee gives the mentee enough comfort to say "I don't get it, please help!" It takes away the fear of being judged, which is a problem in traditional classroom-style learning. But to be fair to the mentor, it is impossible for one person to be a master of everything Python and beyond. That is why we need to trade ideas and skills. Last time, when one of the mentor/mentee pairs needed some help designing an RDBMS schema, one of the other mentors stepped in and helped them complete it much faster. Facilitating such collaboration brings out the best resources in the community. Keeping this in mind, we have decided to use ChiPy's meetup.com discussion threads to keep track of the progress of our mentor and mentee pairs. Here is the first thread introducing what each pair is working on.

Some other points that came out of last night’s discussion:

  • We were not able to find mentors for our Advanced Python track. Based on the feedback, we decided to rebrand it as Python Performance Optimization next time.
  • Each mentor/mentee pair will be creating their own curriculum. Having a centralized repository of those will make them reusable.
  • We should reach out to Python shops in Chicago for mentors. The benefit of this is far-reaching: if a company volunteers its experienced developers as mentors, it could serve as a free apprenticeship program and pave the way for recruiting interns, contractors, and full-time hires. Hat-tip to Catherine for this idea.

Lastly, I want to thank our sponsor, Computer Futures, for being such a gracious host. They are focused on helping Pythonistas find the best Python jobs out there. Thanks for seeing the value in what we are doing; we hope we can continue to work together to help the Python community in Chicago.

If you are interested in learning more about being a mentor or a mentee, feel free to reach out to me. Join ChiPy's meetup.com community to learn more about what's next for the mentors and mentees.



Chicago Python User Group Mentorship Program


If you live in Chicago and have some interest in programming, you must have heard about the Chicago Python Users Group, or ChiPy. Founded by Brian Ray, it is one of the oldest tech groups in the city and a vibrant community that welcomes programmers of all skill levels. We meet on the second Thursday of every month at a new venue, with some awesome talks, great food, and a lot of enthusiasm about our favorite programming language. Besides talks on various language features and libraries, we have had language shootouts (putting Python on the line against other languages), programming puzzle nights, etc.

"@Tathagata: Chicago Python user group at the super awesome new @braintree office! pic.twitter.com/JYmuYOd5Aj" ouaoooo
— Constantino Frydakis (@confryd) October 10, 2014


ChiPy meetups are a great way to learn new things and meet a lot of very smart people. Beginning this October, we are running a one-on-one, three-month mentorship program. It's completely free and totally driven by the community. By building these one-to-one relationships through the mentorship program, we are trying to build a stronger community of Pythonistas in Chicago.


We have kept it open as to how the M&M (mentor & mentee) pairs want to interact, but as an overall goal we wanted the mentors to help the mentees with the following:

1. Selecting a list of topics that is doable in this time frame (October 2014 – January 2015)
2. Helping the mentee with resources (pair programming, tools, articles, books, etc.) when they are stuck
3. Encouraging the mentee to do more hands-on coding and to share their work publicly

It has been really amazing to see the level of enthusiasm among the M&Ms. I have been fortunate to play the role of matchmaker: I look into the background, level of expertise, topics of interest, and availability of all the M&Ms and try to find an ideal pair. I've been collecting data at every juncture so that we can improve the program in later iterations.
Here are some aggregated data points so far:

Signups:
– # of mentors signed up: 15
– # of mentees new to programming: 2
– # of mentees new to Python: 16
– # of mentees in Advanced Python: 5
– Total: 37

Assignments:
– # of mentors with a mentee: 13
– # of mentees new to programming with an assigned mentor: 1
– # of mentees new to Python with an assigned mentor: 11
– # of mentees in Advanced Python with an assigned mentor: 1

Outstanding:
– # of mentors for newbie mentees without an assignment: 2
– # of mentees unreachable: 4
– # of mentees new to programming without an assigned mentor: 1 (unreachable)
– # of mentees new to Python without an assigned mentor: 2 (unreachable)
– # of mentees in Advanced Python without an assigned mentor: 4 (1 unreachable, 3 for lack of advanced mentors)

Other points:
– Data analysis is the most common area of interest.
– # of female developers: 6
– # of students: 2 (1 high school, 1 grad student)
All M&M pairs are currently busy figuring out what they want to achieve in the next three months and preparing a schedule. The advanced mentees are forming a focused hack group to peer-coach on advanced topics.
We are incredibly grateful to the mentors for their time and to the mentees for the enthusiasm they have shown for the program. While this year's mentoring program is completely full, if you are interested in getting mentored in Python, check back in December. Similarly, if you want to mentor someone with your Python knowledge, please let me know. If you have any tips you would like to share on mentoring, or on being a smart mentee, please leave them in the comments and I'll share them with the mentors and mentees. And lastly, I would welcome any suggestions on what I can do to make the program beneficial for everyone.



Twitter Hospital Compare

While working on Coursera's Introduction to Data Science course project, a few folks on the discussion forum started exploring the possibility of performing some Twitter data analysis for healthcare. There were a number of thought-provoking discussions on what insights about healthcare can be mined from Twitter, and I was reminded of a data set I had seen earlier.


Last fall there was another Coursera course, Computing for Data Analysis by Roger Peng, that I was auditing. One of its assignments required doing some statistical analysis on Medicare-aided hospitals. These hospitals have an alarming national readmission rate (19%), with nearly 2 million Medicare beneficiaries getting readmitted within 30 days of release each year, costing $17.5 billion. It is not completely understood how to reduce readmission rates, as even highly ranked hospitals in the country have not been able to bring their rates down.

Research Questions:
I agree that they (listed a bit further below) are all very rudimentary, but my understanding of this country's complicated medical system is very limited. I know how to code, and I take little steps at a time. Or so I thought.


Data description:
While the survey file contains survey data on 4606 hospitals across the country, after cleaning up missing values, "insufficient data", and "errors in data collection", the number of hospitals was down to 3573.
That settles the structured data. Let's talk about unstructured data. Consider a tweet from a user @TonyHernandez, whose nephew recently had successful brain surgery at @FloridaHospital. Yes, the one quoted below.
Normally I'd use Python to do this kind of matching, but since the course had evangelized sqlite3 for such joins, I went that route. A minor point to note here is that case-insensitive string matching in sqlite3 for the text data type needs an additional "collate nocase" qualification when creating the table.
Next, you want to see how many matches between the two datasets you actually get.
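A minimal sketch of both points (the table and column names are made up for illustration; the real schema isn't reproduced here):

import sqlite3

conn = sqlite3.connect("hospitals.db")
cur = conn.cursor()

# COLLATE NOCASE on the name columns makes equality comparisons
# case-insensitive, so "FLORIDA HOSPITAL" matches "Florida Hospital".
cur.execute("""CREATE TABLE IF NOT EXISTS survey
               (hospital_name TEXT COLLATE NOCASE, rating INTEGER)""")
cur.execute("""CREATE TABLE IF NOT EXISTS twitter_handles
               (hospital_name TEXT COLLATE NOCASE, handle TEXT)""")

# How many survey hospitals can we match to a Twitter handle?
cur.execute("""SELECT COUNT(*)
               FROM survey s JOIN twitter_handles t
               ON s.hospital_name = t.hospital_name""")
print(cur.fetchone()[0])
conn.close()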
Moreover, apart from the Twitter handle, the rest of the data in the list was outdated. I needed updated counts of followers, friends, listings, retweets, and favorites for these handles. A quick Twython script did the trick.
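Roughly like this (the credentials and handle list are placeholders, not the actual script):

from twython import Twython

APP_KEY, APP_SECRET = "your-app-key", "your-app-secret"
OAUTH_TOKEN, OAUTH_TOKEN_SECRET = "your-token", "your-token-secret"
twitter = Twython(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET)

handles = ["FloridaHospital"]  # ... the rest of the hand-curated list

# users/lookup accepts up to 100 comma-separated screen names per call.
for start in range(0, len(handles), 100):
    batch = handles[start:start + 100]
    for user in twitter.lookup_user(screen_name=",".join(batch)):
        print(user["screen_name"], user["followers_count"],
              user["friends_count"], user["listed_count"])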
Props to TwitterGoggles for such a nice tweet-harvesting script written in Python 3.3. It allows you to run the script as jobs with a list of handles, and it offers a very nice schema for storing tweets, hashtags, and all relevant logs.
Before I managed to submit the assignment, over two runs of TwitterGoggles I collected 21651 tweets from and to these hospitals, 10863 hashtags, 18447 mentions, and 8780 retweets from Medicare-aided hospitals on Twitter.
Analysis: All this while, I was running on the hope that it would all somehow come together into a story at the last moment. What made things even more difficult was that the survey data was all on a Likert scale, and I could not think up any hardcore data-science analysis for the merged data. However, my peers were extraordinarily generous and gave me 20 points, with the following insightful comments, the first one nailing it.
peer 1 → The idea is promising, but the submission is clearly incomplete. Your objective is not clear: “finding patterns” is too vague as an objective. One could try to infer your objectives from the results, but you just build the dataset an don’t show nor explain how you intended to use it, not to mention any result. Although you mentioned time constraints maybe you should have considered a smaller project.
peer 2 → Very promising work, but it requires further development. It’s a pity that no analysis was made.
While there is a lot left to be done, I thought a quick Tableau visualization of the data might be useful. Click here for an interactive version.


Among the various data sets available from HCAHPS, this one contains feedback about the hospitals obtained by surveying actual patients. I thought it would be interesting to study how patients and hospitals interact on Twitter.


Why do some hospitals have more followers, more favorited tweets, or more retweets? Is it because of the quality of the care they provide? Is the number of Twitter followers of a hospital affected by how the nurses and doctors communicate with their patients? Do patients feel good (sentiment analysis) when hospitals provide a clean, quiet environment and immediate help on request? Would proper discharge information help hospitals get more Twitter love?



The Survey of Patients' Hospital Experiences (HCAHPS) CSV file (from here on referred to as the "survey") contains the following fields:

Composite Topics
– Nurse Communication
– Doctor Communication
– Responsiveness of Hospital Staff
– Pain Management
– Communication About Medicines
– Discharge Information

Individual Items
– Cleanliness of Hospital Environment
– Quietness of Hospital Environment

Global Items
– Overall Rating of Hospital
– Willingness to Recommend Hospital




7hr brain surgery, huge success! Big props to Dr Eric Trumble at #FloridaHospital #disney pavilion, a true ROCK STAR! Thanks 4 prayers!
— Tony Tightropes (@TonyHernandez) June 7, 2013

This tells us how a particular patient feels about the care his nephew received at the hospital. The sentiment of the tweet text, the hashtags, the retweet count, and the favorites count are simple yet powerful signals we can aggregate to get an idea of how the hospital is performing.
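As a rough illustration of the kind of aggregation I have in mind (the table layout here is made up, not TwitterGoggles' actual schema):

import sqlite3

conn = sqlite3.connect("tweets.db")
cur = conn.cursor()

# Hypothetical table of harvested tweets keyed by hospital handle.
cur.execute("""CREATE TABLE IF NOT EXISTS tweets
               (hospital_handle TEXT, retweet_count INTEGER,
                favorite_count INTEGER)""")

# A naive "Twitter love" score per hospital: total retweets + favorites.
cur.execute("""SELECT hospital_handle,
                      SUM(retweet_count + favorite_count) AS engagement
               FROM tweets GROUP BY hospital_handle
               ORDER BY engagement DESC""")
for handle, engagement in cur.fetchall():
    print(handle, engagement)
conn.close()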


Next, I got a list of hospitals that were on Twitter, thanks to the lovely folks who hand-curated it. It was nicely HTML-ed, making it easy to scrape into a Google Doc with one line of ImportXML("http://ebennett.org/hsnl/hospitals-on-twitter/", "//tr"). Unfortunately, the number of hospitals on Twitter according to this list (779) is significantly smaller than the total number of hospitals. But it is still a lot of human work to match the 3573 x 779 hospital names.



So we lose 92% of the survey data, and less than 8% of the hospitals we have survey data for were on Twitter when this list was made. These 246 hospitals are definitely more proactive than the rest of the hospitals, so I already have a biased dataset. Shucks!






While the Twitter API gives direct counts of friends, followers, and listings, for the other attributes I had to collect all the tweets made by these hospitals. Additionally, it is important to get the tweets that mention these hospitals on Twitter.

Collecting such historic data means using the Twitter Search API rather than the live Streaming API. The Search API is not only more stringent as far as rate limits are concerned, it is also thrifty in how many tweets it returns: it is meant to be relevant and current rather than exhaustive.
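A toy version of the harvesting that TwitterGoggles automates, using Twython's search endpoint (app-only auth; the credentials and query are placeholders):

from twython import Twython

APP_KEY, APP_SECRET = "your-app-key", "your-app-secret"

# Obtain an application-only bearer token, which is enough for search.
token = Twython(APP_KEY, APP_SECRET, oauth_version=2).obtain_access_token()
twitter = Twython(APP_KEY, access_token=token)

# The Search API returns a sample of recent, "relevant" tweets,
# not an exhaustive history, and rate limits apply per 15-minute window.
results = twitter.search(q="@FloridaHospital", count=100)
for tweet in results["statuses"]:
    print(tweet["user"]["screen_name"], ":", tweet["text"])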






peer 4 → I thought the project was well put together and organized. I was impressed with the use of github, amazon AWS, and google docs to share everything amongst the group. The project seems helpful to gather data from multiple sources that then can hopefully be used later to help figure out why the readmission rates are so high.

peer 6 → As a strength, this solution is well-documented and interesting. As a weakness, I would like to have seen a couple of visualizations.


It appears that hospitals on the East Coast are far more active on Twitter compared to those on the West Coast. The data is here as a CSV and as a Google Docs spreadsheet.

Agony of Encoding on Python, MySql on Windows

So much has been written about Unicode and Python, but Unipain is the best. Although its ugly head surfaces at times, I had somehow gotten around Unicode-and-Python-2.7 problems without ever giving them the respect they deserve. But a few months ago, on a Sunday morning, I found myself in deep Unipain. This is an attempt to recall how I got out of that mess.

My exploration in program source code analysis generally involves munging text files all day. Up until now, in most projects, it has been text files with ASCII strings. Most of them came from open source projects, with the code written by developers who speak English. However, while working on Entrancer, we found that the dataset that comes with TraceLab (a platform for software traceability research) contained source code from Italian developers. All my Python 2.7 scripts exception-ed miserably when they tried to chew on those files.

An example of one of the input files is here. What confused me more was that all the *nix tools (sort, uniq, etc.) I had access to through Cygwin were happily operating on these input files, but the file utility appeared confused about the encoding.

After a bit of random googling, I found Chardet, by David Cramer, which guesses the encoding of text files.
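Usage is a one-liner. On these files it reported a Central European encoding, hence the confusion in the next paragraph (the file name below is hypothetical):

import chardet

with open("some_italian_source_file.txt", "rb") as f:
    raw = f.read()

print(chardet.detect(raw))
# e.g. {'encoding': 'ISO-8859-2', 'confidence': 0.85}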

So no help there. Why would Italian text be encoded in a Central European character set? RTFM-ing the codecs docs didn't lead anywhere, and soon I had drifted off to reading Hacker News.

OK. Luckily the Internet has made this: Character Encoding Recommendation for Languages. I tried all the variants, like 8859-1, 8859-3, 8859-9, and 8859-15, and all produced similar reactions. Thankfully, Jeff Hinrichs on the Chicago Python mailing list pointed out: "If it is in fact looking like 8859-1 then you should be using cp-1252, that is what HTML5 does." According to Wikipedia, "This encoding is a superset of ISO 8859-1, but differs from the IANA's ISO-8859-1 by using displayable characters rather than control characters in the 80 to 9F (hex) range."
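In Python 2.7 terms, that just means decoding with cp1252 (file name again hypothetical):

import codecs

# cp1252 is a superset of ISO 8859-1: it covers everything latin-1 does,
# plus printable characters in the 0x80-0x9F range.
with codecs.open("some_italian_source_file.txt", encoding="cp1252") as f:
    text = f.read()  # a unicode object from here on

print(type(text))  # <type 'unicode'>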

While this worked for the files of that particular dataset, soon enough another file started biting at the scripts with its encoding fangs. At this point you find yourself asking "Why does Python print unicode characters when the default encoding is ASCII?" and your REPL hurls a joke at you!

You are heartbroken at your failure to appreciate the arcane reasons for choosing the file system encoding as UTF-8 while leaving the default string encoding as ASCII. You try coaxing Python by telling your dotfiles to use UTF-8 (export PYTHONIOENCODING=UTF-8), but Python doesn't care!

By almost noon, you realize it's time to let the inner purist go and let the whatever-works ugly hacker take over.

I vim /path/to/site.py +491-ed my way in and changed the goddamned "ascii" to "utf-8" in the site.py file. In your heart you know this is the least elegant way of solving the problem, as it can break dictionary hashes, and this code should never be allowed to talk to other Python systems expecting a default ASCII encoding. But it's too easy to revert. If you are interested, this is the /path/to/Python27/Lib/site.py file on your system. Read more on why this is a bad idea.
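For the record, the equivalent (and equally frowned-upon) runtime hack is the classic reload trick, which avoids editing site.py itself:

import sys

# setdefaultencoding() is deliberately deleted from sys during startup;
# reload() brings it back. A well-known hack, not a recommendation.
reload(sys)
sys.setdefaultencoding("utf-8")

print(sys.getdefaultencoding())  # utf-8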

And lo and behold! All problems solved. But a safer way to do this might be to beg Python directly at the bangline of your script, as described here.
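That is, the PEP 263 source encoding declaration. Note that it only tells the interpreter how to decode the string literals in that one file; it does not touch sys.getdefaultencoding(), which is what makes it the narrower, safer fix:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# With the declaration above, the byte string literal below is read
# as UTF-8 instead of tripping the parser.
saluto = "perché"
print(saluto)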

With the Python encoding out of the way, it was MySQL's turn to come biting. We needed WordNet for the Italian language for our project, and it uses MySQL for storing the data. Though you have to get approval before using it, it's free, and the folks maintaining it are super helpful.

While importing the data, the first ouch was the following:

Well, clearly it doesn't understand the difference between acute and grave accent marks. Luckily, MySQL Workbench is verbose enough to tell you where it is getting things wrong:

This Stack Overflow post says that you have to do an ALTER SCHEMA; in MySQL Workbench you can right-click on the schema and find it in the menu. That drops you in front of a drop-down to change the default encoding used while importing.
But then it was back to square one again: how do I know the encoding of these strings embedded in the SQL statements? Maybe Chardet knows?

Nice. Next, all you need is to find out what you should select to enable this charset, and luckily it's in the official docs. Turns out I needed latin2.

But unfortunately this did not change the auto-generated import SQL statement that MySQL Workbench was using. It was still using --default-character-set=utf8.

Forget the GUI! Back to the command line. Under Plugins in MySQL Workbench you'll find "Start shell for MySQL Utilities", and you'll be dropped into a shell where you can issue the above import command with the password flag like this:

Note the error message saying it could not open the default file due to a lack of file permissions; that did not stop it from importing the data properly, though. OK, works for me 😉

Finally, everything tertiary was working. That meant it was time to go back to writing the actual code!

Data loss protection for source code

Scope for data loss in the SDLC
In a post-WikiLeaks age, software engineering companies should probably start sniffing their development artifacts to protect their customers' interests. From the requirements analysis document to the source code and beyond, different software artifacts contain information that clients will consider sensitive. The traditional development process has multiple points of potential data loss: external testing agencies, other software vendors, consulting agencies, etc. Most software companies have security experts and/or business analysts redacting sensitive information from documents written in natural language. Source code is a bit different, though.

A lot of companies do have people looking into the source code for trademark infringements and copyright statements that do not adhere to established patterns, and checking that previous copyright/credit notices are maintained where applicable. Black Duck and Coverity are nice tools to help with that.

Ambitious goal

I am trying to do a study on data loss protection in source code: sensitive information or quasi-identifiers that might have seeped into the code in the form of comments, variable names, etc. The ambitious goal is detecting such leaks, automatically sanitizing the source code (probably a replace-all is enough), and retaining code comprehensibility at the same time.

To formulate a convincing case study with motivating examples, I need to mine a considerable code base and requirement specifications. But no software company would actually give you access to such artifacts. Moreover, the (academic) people who would evaluate the study can also be expected to lack such facilities for reproducibility. So we turn towards Free/Open Source software. SourceForge.net, GitHub, Bitbucket, Google Code: huge archives of robust software written by the sharpest minds all over the globe. However, there are two significant issues with using FOSS for such a study.

Sensitive information in FOSS code?

Firstly, what can be confidential in open source code? The majority of FOSS projects develop and thrive outside corporate firewalls, without the need to hide anything. So we might be looking for the needle in the wrong haystack. However, if we can define WHAT sensitive information is, we can probably get around this.

There are commercial products like Identity Finder that detect information like Social Security Numbers (SSNs), credit/debit card numbers (CCNs), bank account information, and any custom pattern or sensitive data in documents. Some more regex foo should be good enough for detecting all such stuff, as in the following sketch:

#!/bin/sh
# Grep every file in the given source directory for each sensitive term.
SRC_DIR=$1
for i in `cat sensitive_terms_list.txt`; do
    for j in `ls "$SRC_DIR"`; do
        grep -EHn --color=always "$i" "$SRC_DIR/$j"
    done
done


Documentation in FOSS

Secondly, the 'release early, release often' ethos of FOSS makes a structured software development model somewhat redundant. Who would want to write requirements docs and design docs when you just want to scratch the itch? The nearest thing in terms of design or specification documentation would be projects that have adopted the Agile model (or Scrum, say) of development; in other words, a model that mandates extensive requirements documentation drawn up in the form of user stories and their ilk.

Still Looking
What are some famous Free/Open Source projects that have considerable documentation closely resembling a traditional development model (or models accepted in closed-source development)? I plan to build a catalog of such software projects so that it can serve as a reference for similar work involving traceability between source code and requirements.

Possible places to look into: (WIP)
* Repositories mentioned above
* ACM/IEEE
* NSA, NASA, CERN

I would sincerely appreciate it if you leave your thoughts, comments, and poison fangs in the comments section … 🙂

Pesky tasks with batch scripts

Scripting is an art. Nifty and subtle, wicked cool scripts can weave magic and startle compiled-language supporters with their skimpy appearance. But it is for getting yet another pesky job done that scripting becomes so important.
The batch scripting language is the Windows equivalent (read: wannabe) of the more sane bash scripting. Like many other products from Microsoft, it lacks elegance, is limited, and does not have good support for regular expressions. Below are some pesky jobs that can still be done with batch scripts.

Pesky job 1 : Map a network drive

net use N: | find "OK"
if errorlevel 1 net use N: \\servername\path$ ******** /user:******* /persistent:yes

This will check whether drive N: is mapped; if not (find fails and sets errorlevel to 1), it will map \\servername\path$ with the proper username/password values and keep the mapping persistent across reboots.

Pesky job 2 : Copying files with a time stamp
Say we want to copy a file from one directory to another with the current date stamp; it could be a simple
copy help.txt Desktop\%date:~10,4%%date:~7,2%%date:~4,2%-chgs-1.txt

Truly ugly? Quite right.

Normally the date command would output:

C:\Documents and Settings\Tatha>date
The current date is: Mon 11/17/2008
Enter the new date: (mm-dd-yy)

To use the date stamp, say in an echo statement, put the variable within percentage signs. To extract part of the time stamp, follow the variable with ":~offset,number_of_characters". For example:

C:\Documents and Settings\Tatha>echo %date:~0,14%
Mon 11/17/2008

So, on 17th November 2008, the copy command above would create a copy of help.txt at C:\Documents and Settings\Tatha\Desktop with the name 20081711-chgs-1.txt.

But wait, this won't work on a Windows NT box. It seems the automatic variables DATE and TIME were not implemented until Windows 2000, so if you want a time stamp on an NT box you should use:

time /t >> file.txt

Pesky job 3 : Starting and stopping windows services gracefully
Another glitch I came across when running newer bat scripts on Windows NT is controlling Windows services. Consider the following snippet to stop a service named SomeAppServer (or someappserver) on a Windows XP box.

net start | find "SomeAppServer"
if errorlevel 1 goto STOPPED
if errorlevel 0 echo %date% %time% Attempting to stop SomeAppServer >> log.txt
start /wait net stop "SomeAppServer" >> log.txt 2>&1
if errorlevel 1 echo %date% %time% SomeAppServer could not be stopped >> log.txt
:STOPPED
echo %date% %time% SomeAppServer is stopped >> log.txt
echo -- >> log.txt

However, if the name of the service is someappserver instead of SomeAppServer as written in the script, the script will fail to stop the service on a Windows NT box. NT treats service names as case-sensitive, and you need to supply the name exactly as it is listed.

Here are some good resources for batch scripting:
http://www.robvanderwoude.com/batchcommands.html
http://weblogs.asp.net/jgalloway/archive/2006/11/20/top-10-dos-batch-tips-yes-dos-batch.aspx

Moving on

Life has not been interesting enough to produce further gibberish for the last few months. At work I'm looking into a plethora of antediluvian technologies, but still making the effort to learn new ones.

My white paper titled Security Concerns with Web Services was warmly appreciated and got published on our internal knowledge net. Though I cannot publish it anywhere else, I can surely share the helpful tools I used to detect web service vulnerabilities.

With the tools listed below, some imagination, and a desire to have fun, you can get a really good idea of web services security.

Tools for studying Web Services Security

  • WebGoat is an insecure J2EE application that provides a number of lessons for practicing commonly known security exploits.
  • Soap UI is a popular SOA and web services testing tool with a number of features like web service client code generation, mock service implementation, and Groovy scripting.
  • WS Fuzzer is a fuzzing penetration-testing tool used against HTTP SOAP-based web services. It tests numerous aspects (input validation, XML parser, etc.) of the SOAP target.
  • WebScarab is a framework for analysing applications that communicate using the HTTP and HTTPS protocols.
  • LiveHTTPHeaders is a Mozilla plugin that provides all the information about the browser's traffic.
  • Cryptcat is a lightweight version of netcat with integrated transport encryption capabilities.
  • Fiddler is an HTTP debugging proxy which logs all HTTP traffic between your computer and the Internet. Fiddler allows you to inspect all HTTP traffic, set breakpoints, and "fiddle" with incoming or outgoing data.
  • TcpMon is a utility that allows the user to monitor messages passed along in a TCP-based conversation.
  • cURL is a tool to transfer data from or to a server, using one of the supported protocols (HTTP, HTTPS, FTP, FTPS, SCP, SFTP, TFTP, DICT, TELNET, LDAP or FILE). The command is designed to work without user interaction.

Most of the above tools come with neat documentation, so have fun!

On loss and new beginning

“How does it feel
How does it feel

To be on your own

With no direction home

Like a complete unknown

Like a rolling stone?”

I lost it. I lost it all.
Three years of electronic ranting, tales of code, help, pride, use, abuse, love, hate, lies, videos, pdfs – fuck, the list is endless! It surely justifies taking a sick leave …
Andrew Grove says only the paranoid survive. But he never said to get hyper-paranoid for survival. Well, no regrets, brother – just lessons.
If you have no clue which loss I'm talking about, you hardly know me. It's my Google account: I forgot the password for it. The big G is the spinal cord of your online existence; once you snap away from it, your Gmail, blog, Orkut, Notebook, Reader, Docs – everything refuses you as if you were some sort of beguiler trying to steal the free services and become the next spam superstar!

Every loss makes you wiser. It's like a tool that refreshes the old and paves the way for the new. So …

Turn the clock to zero, boss
The river’s wide, we’ll swim across
Started up a brand new day

It could happen to you – just like it happened to me
There’s simply no immunity – there’s no guarantee
I say love’s such a force – if you find yourself in it
And sometimes no reflection is there