This year's WWDC was full of new things, and we discovered a lot of new features. I even believe that some of them are more powerful than they seem at first impression.
By the time the general public discovers these new APIs, I think a lot of developers will have come up with really great apps and new usages.
This year all videos and presentations were available very soon after each session ended. Now that WWDC is over, all the videos are available for download.
Bash script for automatic download of WWDC 2013 conference videos
Just for fun, and for the little quick challenge of doing some scripting, I decided to write a quick and dirty script that automatically gathers the session videos and the associated presentations. Of course, you will need Apple developer credentials to be able to download the documents.
I tried to make it as simple as possible and posted it on GitHub. And as soon as it was shared there, I already got a suggestion for improvement from Andy Lee. This is great, and this is what I like about open source and contribution tools like GitHub.
So today, the script takes your Apple Developer login as a parameter, prompts for your password, then downloads all the WWDC 2013 videos and PDFs into a folder on your Desktop.
If for some reason the download does not complete in one shot, re-run the script: it will not re-download what has already been downloaded.
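The skip-what-is-already-there behavior can be sketched like this. This is not the actual script, just a minimal illustration of the idea; the file names and the `download_if_missing` helper are hypothetical (the real script derives the names from the WWDC session pages):

```shell
# Hypothetical sketch: skip files that already exist, resume the rest.
DOWNLOAD_DIR="${DOWNLOAD_DIR:-$HOME/Desktop/WWDC-2013}"
mkdir -p "$DOWNLOAD_DIR"

download_if_missing() {
  local url="$1"
  # Assumes the last path component of the URL is a usable file name
  local target="$DOWNLOAD_DIR/$(basename "$url")"
  if [ -s "$target" ]; then
    # A non-empty file is already there: nothing to do
    echo "Skipping $(basename "$target") (already downloaded)"
  else
    # -c lets wget resume a partially downloaded file
    wget -c -O "$target" "$url"
  fi
}
```

Checking for a non-empty file (`-s`) rather than mere existence (`-f`) avoids treating an empty file from a failed run as "done".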
Syntax: wwdcVideoPDFGet.sh <your-itunesconnect-login>
You can download the script from GitHub here!
I know this script is far from perfect, but it is a one-shot script that just needed to be written quickly, so I skipped all the smart tricks and went for the dirty but quick ones … I'm sure you will not blame me.
Of course, like Andy did, you are free to propose and share your improvements.
Extra version using curl instead of wget
I had forgotten that wget does not come out of the box on Mac OS X! So if for some reason you don't have wget, the only reasonable attitude is:
brew install wget
But if you don't have Homebrew installed yet and don't plan to update your Mac OS X usage strategy right now (I recommend you move to Homebrew), I guess you still have curl installed?
Following Jason's comment below, I quickly ported the downloader script to use curl instead of wget. The curl version is also available on GitHub here.
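For the curious, the port mostly comes down to translating a few wget flags into their curl equivalents. The mapping below is a sketch from the two tools' man pages, and the `fetch` helper is a hypothetical illustration, not the actual script:

```shell
# Rough wget -> curl flag translation:
#   wget -c URL           ->  curl -C - -O URL   (resume, keep remote name)
#   wget -O file URL      ->  curl -o file URL   (explicit output name)
#
# A minimal fetch helper in the curl flavor (hypothetical):
fetch() {
  local url="$1" target="$2"
  # -L follows redirects; -sS stays quiet but still reports errors.
  # (A resumable variant would also pass -C - as noted above.)
  curl -LsS -o "$target" "$url"
}
```

One practical difference: wget follows redirects by default, while curl needs `-L` to do the same, which is easy to forget when porting.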
Reading the manbolo blog, it appears that installing Homebrew can be as simple as:
ruby -e "$(curl -fsSkL raw.github.com/mxcl/homebrew/go)"
You should get it now!
Getting excited with comments
Don’t hesitate to comment and suggest improvement.
Every time I see someone commenting (as if the blog post really does help people), I suddenly get very excited. And it's always a good excuse for me to escape from my normal life and do something geeky! You know, some kind of nano-vacation (= coding) within daily life (or night life)?
Anyway, thanks to Jason for pushing me to do the cURL version, and thanks to Andy for improving the script.
Bash scripting is not robust enough for web scraping
As a reminder: in general, it is not recommended to use bash to scrape web content. It is far less robust than using a real HTML parser such as Python's Beautiful Soup (which is what JC also used to get the WWDC 2013 videos and PDFs).
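To see why, here is what the bash approach essentially boils down to: regex-matching hrefs out of the page HTML. The markup below is made up for illustration, not Apple's actual page:

```shell
# Sample markup (invented for illustration):
html='<a href="/videos/wwdc/2013/session.mov">Video</a>
<a href="/videos/wwdc/2013/session.pdf">Slides</a>'

# Works for this exact layout: pull out every href value.
echo "$html" | grep -o 'href="[^"]*"' | sed 's/href="//;s/"$//'

# ...but it breaks as soon as the site switches to single quotes,
# puts other attributes before href, or splits a tag across lines,
# which is exactly why a real HTML parser is more robust.
```

This prints the two paths for this sample, but the pattern is tied to one specific HTML layout, whereas a parser like Beautiful Soup understands the structure regardless of formatting.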
Writing this script was more about seeing how hard it would be to get web content using bash.
Discuss this on Hacker News