The Match Making:
Thanks to the brilliant work by the mentors and mentees from 2014, we got a massive response as soon as we opened registration this year. While the number of mentee applications grew rapidly, we were unable to find enough mentors and had to cap the mentee applications at 30. Of them, 8 were Python beginners, 5 were interested in web development, 13 in Data Science, and the rest in Advanced Python. After some interwebs lobbying and some arm-twisting mafia tactics, we finally managed to get 19 mentees hooked up with their mentors.
Based on my previous experience pairing mentors and mentees, the relationship works out only if there is a common theme of interest between the two. To make the matching process easier, I focused on getting a full-text description of each applicant's background and end goals, as well as their LinkedIn data. From what I heard last night from the mentors, the matches have clicked!
The Mentors’ Dinner:
As ChiPy organizers, we are incredibly grateful to these 19 mentors, who are devoting their time to help the Python community in Chicago. Last night's dinner was a humble note of thanks to them. In the relaxed atmosphere of the pizzeria, stuffed with pizza and beer, we had an opportunity to talk about how we can make the process more effective for both mentors and mentees.
Trading of ideas and skills:
The one-to-one relationship between mentor and mentee gives the mentee enough comfort to say, "I don't get it, please help!" It takes away the fear of being judged, which is a problem in traditional classroom-style learning. But to be fair to the mentor, it is impossible for any one person to be a master of everything Python and beyond. That is why we need to trade ideas and skills. Last time, when one of the mentor/mentee pairs needed help designing an RDBMS schema, another mentor stepped in and helped them finish much faster. Facilitating such collaboration brings out the best resources in the community. With this in mind, we have decided to use ChiPy's meetup.com discussion threads to keep track of the progress of our mentor/mentee pairs. Here is the first thread introducing what each pair is working on.
Some other points that came out of last night’s discussion:
- We were not able to find mentors for our Advanced Python track. Based on the feedback, we decided to rebrand it as Python Performance Optimization next time.
- Each mentor/mentee pair will be creating their own curriculum. Having a centralized repository of those curricula will make them reusable.
- Reaching out to Python shops in Chicago for mentors. The benefits of this are far-reaching: if a company volunteers its experienced developers as mentors, it could serve as a free apprenticeship program and pave the way for recruiting interns, contractors and full-time hires. Hat-tip to Catherine for this idea.
Lastly, I want to thank our sponsor, Computer Futures, for being such a gracious host. They are focused on helping Pythonistas find the best Python jobs out there. Thanks for seeing the value in what we are doing – we hope we can continue to work together to help the Python community in Chicago.
If you are interested in learning more about being a mentor or a mentee, feel free to reach out to me. Join ChiPy's meetup.com community to learn more about what's next for the mentors and mentees.
If you live in Chicago and have some interest in programming, you must have heard about the Chicago Python Users Group, or ChiPy. Founded by Brian Ray, it is one of the oldest tech groups in the city and a vibrant community that welcomes programmers of all skill levels. We meet on the second Thursday of every month at a new venue, with some awesome talks, great food and a lot of enthusiasm for our favorite programming language. Besides talks on various language features and libraries, we have had language shootouts (putting Python on the line against other languages), programming puzzle nights, etc.
ChiPy meetups are a great way to learn new things and meet a lot of very smart people. Beginning this October, we are running a one-on-one, three-month mentorship program. It's completely free, and totally driven by the community. By building these one-on-one relationships through the mentorship program, we are trying to build a stronger community of Pythonistas in Chicago.
We have kept it open on how the M&M pairs want to interact, but as an overall goal we wanted the mentors to help the mentees with the following:
Scrum team. I was familiar with the Agile principles and practices before joining the team, but doing Scrum hands-on has been an eye-opener. Having said that, rarely a day has passed when I have not experienced impostor syndrome. I started imagining crazy scenarios which would always end in someone saying, "What? You don't know what Y means? Who hired this guy?". All my hopes were hanging on the "new guy" card – but that has its shelf life.
After talking to some great Scrum coaches, Scrum masters and experienced programmers, I have collected some tips that can be applied to reduce the self-loathing when you are the new guy on a Scrum team.
Individuals and Interactions
What is at the end of a process decision, a bug, an obfuscated code commit or a failing test? A human being. If you really want to understand why something is the way it is, you have to connect and communicate. While there are many ways to connect these days, a face-to-face introduction without a burning requirement seems to make future communication much easier. This means: try to be the first person to give.
I was very excited to see the Python wizardry at my workplace, and asked my manager if we could host the Chicago Python user group. The proposal was met with great enthusiasm, and we are hosting the February ChiPy meetup.
We are better when we are connected. So don’t avoid workplace White Elephant parties or potlucks.
Offer help via pairing
In an alien code base, with little domain knowledge, even if you are an algo-wiz-design-pattern-guru you will not be able to check in bug-free code. The in-house frameworks (you have to be very lucky to have good documentation accompanying them), coding standards and testing practices will add to the learning curve. My Scrum master realized this early, and offered a highly effective solution from Extreme Programming: offer to pair with a team member on a particular story, even if you have no clue about it. Neither you nor the person you are pairing with can anticipate how much you are going to slow them down, so before you start working together it's better to make sure they understand how familiar you are with the language, modules, frameworks and the business problem. If possible, try to read the tests before you start pairing. Get in the driver's seat.
Effective pair programming is HARD, especially if you have been playing solo for a long time. It will feel like sharing a steering wheel while driving a car, forced into the slowest lane. But as Uncle Bob would say, you can only build good software by going slow.
Keep track of your progress
This is something I picked up after I started working out (love my Fitbit and Nike+). Peter Drucker said, "If you can't measure it, you can't manage it." Some simple metrics that you'll find immediately usable:
– # of code commits
– # of code reviews done
– increase in test coverage
– # of support tickets closed
– # of times you helped others on IRC or mailing lists
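If you want to automate part of the bookkeeping, the counting is a few lines of Python. A minimal sketch, assuming your log is just a list of dated events (the event names and numbers below are invented; in practice they could be harvested from `git log` or your ticket tracker):

```python
from collections import Counter

# Hypothetical one-week log of new-guy events.
events = [
    ("Mon", "commit"), ("Mon", "code review"),
    ("Tue", "commit"), ("Wed", "ticket closed"),
    ("Thu", "commit"), ("Fri", "helped on IRC"),
]

# Tally how often each kind of event happened this week.
weekly = Counter(kind for _, kind in events)
for kind, count in weekly.most_common():
    print(kind, count)  # e.g. "commit 3"
```

Compare the tallies week over week and you have your trend line, no Fitbit required.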
A Moleskine notebook and a pen are your best friends when you are the new guy. Personally, I am unsatisfied with only digital or only paper, and use a combination of both. Additionally, keep a few blank sheets at your desk that people can scribble on to explain stuff when they stop by.
Identify one area that needs love
Unless you are playing with the Beatles, every team has an area that needs some love. You will learn about those areas during your Sprint retrospective, which is Scrum's way of preventing broken windows. Make an effort to develop expertise in those areas, and try to help the team get more productive. A good place to start is test coverage and writing acceptance tests.
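As a minimal sketch of what such a first test might look like (the `apply_discount` function and its business rule are invented for illustration, not taken from any real code base):

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical business rule: reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)

class DiscountAcceptanceTest(unittest.TestCase):
    """Acceptance-style checks phrased in terms of the business rule."""

    def test_ten_percent_off_fifty(self):
        self.assertEqual(apply_discount(50.0, 10), 45.0)

    def test_rejects_an_impossible_discount(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)
```

Tests like these get picked up automatically by `python -m unittest discover`, and every one you add nudges the coverage number in the right direction.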
Those are some of the tips I have received in the last few months. None of them is specific to Scrum, or Agile for that matter, but they are helpful in an extremely dynamic environment. In the end it is a lot of common sense and the desire to help your teammates. What do you think?
an alarming national re-admittance rate (19%), with nearly 2 million Medicare beneficiaries readmitted within 30 days of release each year, costing $17.5 billion. It is not completely understood how to reduce readmission rates, as even the highly ranked hospitals of the country have not been able to bring their rates down.
While the Survey file contains survey data on 4606 hospitals across the country, after cleaning up missing values, "insufficient data" and "errors in data collection", the number of hospitals was down to 3573.
That settles the structured data. Let's talk about unstructured data. Consider a tweet from a user @TonyHernandez, whose nephew recently had successful brain surgery at @Florida Hospital. Yes, this one.
Normally I'd use Python to do this kind of matching, but since the course had evangelized sqlite3 for such joins, I went that route. A minor point to note here is that case-insensitive string matches in sqlite3 on the text data type need an additional "collate nocase" qualification when creating the table.
Next, you want to see how many matches between the two datasets you actually get.
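A minimal sqlite3 sketch of both points – the `collate nocase` qualification and the join count (the table contents and handles are invented stand-ins for the real survey and Twitter data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# COLLATE NOCASE makes comparisons on these text columns case-insensitive.
cur.execute("CREATE TABLE survey (hospital TEXT COLLATE NOCASE)")
cur.execute("CREATE TABLE twitter (hospital TEXT COLLATE NOCASE, handle TEXT)")

cur.executemany("INSERT INTO survey VALUES (?)", [
    ("FLORIDA HOSPITAL",),
    ("Rush University Medical Center",),
    ("Some Rural Hospital",),
])
cur.executemany("INSERT INTO twitter VALUES (?, ?)", [
    ("Florida Hospital", "@hypothetical_handle_1"),
    ("rush university medical center", "@hypothetical_handle_2"),
])

# Despite the case differences, the join matches two of the three hospitals.
cur.execute("""SELECT COUNT(*)
               FROM survey s JOIN twitter t ON s.hospital = t.hospital""")
print(cur.fetchone()[0])  # 2
```

Without the `collate nocase` on the columns, the same join would return zero rows here.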
Moreover, apart from the Twitter handle, the rest of the data in the list was outdated. I needed an updated count of followers, friends, lists, retweets and favorites for these handles. A quick Twython script did the trick.
Props to TwitterGoggles for such a nice tweet-harvesting script written in Python 3.3. It allows you to run the script as jobs with a list of handles, and offers a very nice schema for storing tweets, hashtags and all the relevant logs.
Before I managed to submit the assignment, over two runs of TwitterGoggles I had collected 21651 tweets from and to these hospitals, 10863 hashtags, 18447 mentions, and 8780 retweets from Medicare-aided hospitals on Twitter.
Analysis: All this while, I was running on the hope that it would all somehow come together to form a story at the last moment. What made things even more difficult was that the survey data was all on a Likert scale – and I could not think up any hardcore data-science analysis for the merged data. However, my peers were extraordinarily generous and gave me 20 points, with the following insightful comments – the first comment nails it.
peer 1 → The idea is promising, but the submission is clearly incomplete. Your objective is not clear: “finding patterns” is too vague as an objective. One could try to infer your objectives from the results, but you just build the dataset an don’t show nor explain how you intended to use it, not to mention any result. Although you mentioned time constraints maybe you should have considered a smaller project.
peer 2 → Very promising work, but it requires further development. It’s a pity that no analysis was made.
While there is a lot left to be done, I thought a quick Tableau visualization of the data might be useful. Click here for an interactive version.
This tells us how a particular patient (or his nephew) feels about the care received at the hospital. The sentiment of the tweet text, the hashtags, the retweet count and the favorites count are simple yet powerful signals we can aggregate to get an idea of how the hospital is performing.
Next, I got a list of hospitals that were on Twitter … thanks to the lovely folks who hand-curated it. It was nicely HTML-ed, making it easy to scrape into a Google Doc with one line of ImportXML("http://ebennett.org/hsnl/hospitals-on-twitter/", "//tr"). Unfortunately, the number of hospitals on Twitter according to this list (779) is significantly lower than the total number of hospitals. But it is still a lot of human work to match the 3573 x 779 hospital names.
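If you'd rather stay in Python, the standard library's html.parser can pull out the same `//tr` rows. A sketch, fed with a made-up two-row snippet instead of the real page:

```python
from html.parser import HTMLParser

class RowExtractor(HTMLParser):
    """Collect the text of every <td> cell, grouped by <tr> row."""

    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag == "td":
            self._in_td = False

    def handle_data(self, data):
        if self._in_td and data.strip():
            self._row.append(data.strip())

# Invented stand-in for the hand-curated hospitals-on-twitter table.
html = """<table>
<tr><td>Florida Hospital</td><td>@FLHospital</td></tr>
<tr><td>Rush University Medical Center</td><td>@RushMedical</td></tr>
</table>"""

parser = RowExtractor()
parser.feed(html)
print(parser.rows)
```

Point the same parser at the fetched page body and you get one list per table row, ready to load into sqlite3.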
So we lose 92% of the survey data, and less than 8% of the hospitals we have data for were on Twitter when this list was made. These 246 hospitals are definitely more proactive than the rest, so I already have a biased dataset. Shucks!
While the Twitter API gives direct counts of friends, followers and lists, for the other attributes I had to collect all the tweets made by these hospitals. Additionally, it is important to get the tweets that mention these hospitals on Twitter.
Collecting such historic data means using the Twitter Search API rather than the live Streaming API. The Search API is not only more stringent as far as rate limits are concerned, it is also thrifty in terms of how many tweets it returns: it is meant to be relevant and current rather than exhaustive.
peer 4 → I thought the project was well put together and organized. I was impressed with the use of github, amazon AWS, and google docs to share everything amongst the group. The project seems helpful to gather data from multiple sources that then can hopefully be used later to help figure out why the readmission rates are so high.
peer 6 → As a strength, this solution is well-documented and interesting. As a weakness, I would like to have seen a couple of visualizations.
So much has been written about Unicode and Python, but Unipain is the best. Although its ugly head surfaces at times, I had somehow always gotten around Unicode problems in Python 2.7 and never given them the respect they deserve. But a few months ago, on a Sunday morning, I found myself in deep Unipain. This is an attempt at recalling how I got out of that mess.
My exploration in program source code analysis generally involves munging text files all day. Up until now, for most projects, it has been text files with ASCII strings. Most of them came from open-source projects whose code was written by developers who speak English. However, while working on Entrancer, we found that the dataset that comes with TraceLab (a platform for software traceability research) contained source code from Italian developers. All my Python 2.7 scripts exception-ed miserably when they tried to chew on those files.
An example of one of the input files is here. What confused me more was that all of the *nix tools (sort, uniq, etc.) I had access to through Cygwin were happily operating on these input files, yet the file utility appeared confused about the encoding.
After a bit of random googling, I found Chardet by David Cramer which guesses the encoding of text files.
So, no help there. Why would Italian text be encoded in a Central European character set? RTFM-ing the codecs docs didn't lead anywhere, and soon I had drifted to reading Hacker News.
OK. Luckily the Internet has made this: the Character Encoding Recommendation for Languages. I tried all the variants – 8859-1, 8859-3, 8859-9 and 8859-15 – and all produced similar reactions. Thankfully, Jeff Hinrichs on the Chicago Python mailing list pointed out: "If it is in fact looking like 8859-1, then you should be using cp-1252; that is what HTML5 does." According to Wikipedia: "This encoding is a superset of ISO 8859-1, but differs from the IANA's ISO-8859-1 by using displayable characters rather than control characters in the 80 to 9F (hex) range."
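The practical difference is easy to demonstrate: in the 0x80–9F range cp1252 has printable punctuation where ISO 8859-1 (latin-1) has invisible control characters. A quick sketch (the byte string is just an invented example of Windows-authored Italian text):

```python
# 0x92 is the Windows "curly" apostrophe in cp1252,
# but an invisible C1 control character in ISO 8859-1.
raw = b"dell\x92arte"

as_latin1 = raw.decode("latin-1")  # contains U+0092, a control character
as_cp1252 = raw.decode("cp1252")  # contains U+2019, a right single quote

print(repr(as_latin1))
print(repr(as_cp1252))
```

Both decodes "succeed", which is exactly why latin-1 can silently mangle text that was really cp1252.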
While this worked for the files of one particular dataset, soon enough another file started biting at the scripts with its encoding fangs. At this point you find yourself asking "Why does Python print unicode characters when the default encoding is ASCII?" – and your REPL hurls a joke at you!
You are heartbroken at your failure to appreciate the arcane reasons for choosing the file system encoding as UTF-8 while leaving the default string encoding as ASCII. You try coaxing Python by telling your dotfiles to use UTF-8 – export PYTHONIOENCODING=UTF-8 – but Python doesn't care!
Almost by noon, you realize it's time to let the inner purist go and let the whatever-works ugly hacker take over.
I vim /path/to/site.py +491-ed my way in and changed the goddamned "ascii" to "utf-8" in the site.py file. In your heart you know this is the least elegant way of solving the problem: it would break dictionary hashes, and this code should never be allowed to talk to other Python systems expecting a default ascii encoding. But it's easy enough to revert. If you are interested, this is the /path/to/Python27/Lib/site.py file on your system. Read more on why this is a bad idea.
And lo and behold! All problems solved. But a safer way to do this might be to beg Python directly at the bangline of your script, as described here.
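Safer still is to never lean on the default encoding at all and to be explicit at every file boundary; `io.open` (which is the built-in `open` in Python 3) takes an `encoding` argument. A minimal sketch with a throwaway cp1252 file:

```python
import io
import os
import tempfile

# Write a cp1252-encoded file, the way a Windows-authored dataset might arrive.
path = os.path.join(tempfile.mkdtemp(), "sample.txt")
with open(path, "wb") as f:
    f.write(u"perch\u00e9".encode("cp1252"))  # Italian "perché"

# Read it back by naming the encoding explicitly -- no site.py surgery needed.
with io.open(path, encoding="cp1252") as f:
    text = f.read()

print(text)  # perché
```

Decode at the edges, keep unicode inside, and the default encoding never gets a chance to bite.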
With the Python encoding out of the way, it was MySQL's turn to come biting. We needed WordNet for the Italian language for our project, and it uses MySQL for storing the data. Though you have to get approval before using it, it's free, and the people maintaining it are super helpful.
While importing the data, the first ouch was the following:
Well, clearly it doesn't understand the difference between acute and grave accent marks. Luckily, MySQL Workbench is verbose enough to tell you where it is getting things wrong:
This Stack Overflow post says that you have to do an ALTER SCHEMA – in MySQL Workbench you can right-click on the schema and find it in the menu. It drops you in front of a drop-down to change the default encoding while importing.
But it was back to square one again: how do I know the encoding of these strings embedded in the SQL statements? Maybe Chardet knows?
Nice. Next, all you need is to find out what you should select to enable this charset – and luckily it's in the official docs. It turns out I needed
But unfortunately, this did not change the auto-generated import SQL statement that MySQL Workbench was using. It was still using --default-character-set=utf8.
Forget the GUI! Back to the command line. Under Plugins in MySQL Workbench you'll find "Start shell for MySQL Utilities", and you'll be dropped into a shell where you can issue the above command with the password flag like this:
Note the error message saying it could not open the default file due to a lack of file permissions; that did not stop it from importing the data properly, though. OK! Works for me 😉
Finally, all the peripheral stuff was working. That meant it was time to go back to writing the actual code!
Two hours of caffeine-drenched brainstorming spat out the following:
- I sketched out how the process might flow in two steps. We are down to a pretty bare-minimum concept build, which is ideal both for this class and for getting something up quickly so that we can test it.
- I set up a Twitter account for Lake Effect Ventures so that we can tweet about progress we are making.
- Andy is going to write up a positioning statement and beef up the business model canvas for the concept.
- Leandre will use these to complete our 2-slide initial submission for our deliverable for the next deadline
- Leandre will also use this to start to craft a presentation deck
- Benn will be working on the copy for the landing page that I started.
- Benn will also be crafting a logo in Photoshop (Alex, Zak, Sidi – if any one of you is good with design, Benn would appreciate the assistance there)
- We need to think of a name for the concept as well
We think it is a bit premature to start on the user stories right now, given that we have a good idea of what we are going to build. Charles and I are going to start on that, and look to have something complete from a Version 1.0 standpoint by the middle of next week, barring any setbacks. We will craft the user stories once we complete the MVP, and use them as structure for testing features and functionality (Zak, stay tuned on this).
Benn and Andy will also be working on putting together a more formal customer survey, so that we can structure the interviews we are having and start to compile meaningful data, which we will need going forward.
It's getting exciting…
Incredible Networking: Collect the names and emails of all the folks you meet. Be very careful about who your friends are, and keep in touch – after all, you become the average of the five people you spend your time with. Call them up – it's incredible what people will tell you over the phone. (This is something I have always fallen short on – I can hardly get beyond emails.)
Carry Chessick, the founder and last CEO of restaurant.com, once told me after his lecture session at UIC that networking as it is commonly perceived is worthless. When you meet people, make sure you finish by saying, "If I can be of any help to you, please do not hesitate to get in touch." That's the only way that business card will actually fetch you some benefit. I met a sales guy from SalesForce.com some time back at a Chicago Urban Geeks Drink, who sent out a mail immediately after the introduction, from his phone, with one line saying who he was, where we met, and that he'd keep an eye on tech internship notices for me. Brilliant.
360s: If you want to find information about a company, of course you Google it. But let's say you are gathering info about Google: you'll also want to talk to its competitors – Yahoo, Bing – and find out what they are thinking. Then you triangulate all that information to put yourself in a good position.
Coaching: Make sure there is someone who will consistently give you advice on what's going on in your workplace.
Mentoring: Having a very trusted person outside your work who can give advice is invaluable.
Time buddy: How do you make sure you are doing good time management? Get a time buddy, and compare your calendars to see how you are spending time. Bill Gates does this with Steve Ballmer.
Another interesting practice I read some time back on Hacker News is communicating with team members in two short updates at regular intervals:
(1) What I did last week/day:
(2) What I’ll do next week/day:
As my dear friend Guru Devanla (https://github.com/gdevanla) would put it: "It's all about setting expectations … and meeting them"!