
J. Daniel Ashton



Monday, September 06, 2004

How to find a job…

…the technical way.

Over this holiday weekend I've started work on a set of programs to help me manage the interesting job listings in IBM's internal jobs database. I have a handful of minor gripes with the system: its automated search facility returns only the first five new jobs for any given set of search criteria, only five searches can be automated, and jobs that match more than one search agent appear under both sets of results, duplicating data. In addition, the basic search feature lets you search for only three keywords (or phrases), and there's no simple way to tell when you've already reviewed and (mentally) rejected a given job posting.

What I've written so far is a set of Perl scripts that connect to the site and retrieve the postings for a given set of search criteria. I've succeeded in getting valid HTML pages with working links back to the site, and in concatenating the multi-page results into one page. I also wrote a script to retrieve the list of jobs I've already applied for.
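My scripts are in Perl, but the page-merging step is easy to sketch in any language. Here is roughly the shape of it in Python; the base URL and the HTML are made up for illustration, and a real version would of course have to match the actual markup the site emits:

```python
import re
from urllib.parse import urljoin

# Hypothetical base URL standing in for the internal jobs site.
BASE_URL = "https://jobs.example.ibm.com/"

def absolutize_links(html, base=BASE_URL):
    """Rewrite relative href attributes to absolute URLs,
    so links still work once the page is saved locally."""
    return re.sub(
        r'href="(?!https?://)([^"]+)"',
        lambda m: 'href="%s"' % urljoin(base, m.group(1)),
        html,
    )

def concatenate_pages(pages):
    """Pull the <body> content out of each result page and
    merge them into a single valid HTML document."""
    bodies = []
    for page in pages:
        m = re.search(r"<body[^>]*>(.*?)</body>", page, re.S | re.I)
        bodies.append(m.group(1) if m else page)
    merged = "\n<hr/>\n".join(absolutize_links(b) for b in bodies)
    return "<html><body>\n" + merged + "\n</body></html>"
```

The link-rewriting matters because the saved, concatenated page lives outside the site: relative links would otherwise point nowhere.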

The next step is to store all these jobs in a local database, and then repeat the download for other search criteria, e.g. one search for linux, another for grid, another for work at home, etc. As I store the results of each search I can flag each job with the keywords it matched. After that I'll throw together a proxy through which I can retrieve each posting: the proxy can highlight the matching search terms and add No way, Maybe and Yes buttons. This will let me feel confident that I've reviewed all the listings for all the keywords I can think of, while avoiding the frustration of facing several hundred mostly duplicated entries each day.
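The de-duplication trick is to key the local database on the job ID and keep the keyword matches in a separate table, so a job that three searches return is still stored once. A sketch of that schema (table and column names are my invention, shown here with Python's built-in SQLite rather than the Perl DBI my scripts would use):

```python
import sqlite3

def open_db(path=":memory:"):
    """Create the two tables: one row per job, many keyword matches per job."""
    db = sqlite3.connect(path)
    db.executescript("""
        CREATE TABLE IF NOT EXISTS jobs (
            job_id TEXT PRIMARY KEY,
            title  TEXT,
            status TEXT DEFAULT 'new'   -- later: 'no way', 'maybe', 'yes'
        );
        CREATE TABLE IF NOT EXISTS matches (
            job_id  TEXT,
            keyword TEXT,
            PRIMARY KEY (job_id, keyword)
        );
    """)
    return db

def store_job(db, job_id, title, keyword):
    # INSERT OR IGNORE keeps a single row per job even when
    # several search agents return the same posting.
    db.execute("INSERT OR IGNORE INTO jobs (job_id, title) VALUES (?, ?)",
               (job_id, title))
    db.execute("INSERT OR IGNORE INTO matches VALUES (?, ?)",
               (job_id, keyword))

def keywords_for(db, job_id):
    """All search terms this job matched, for the proxy to highlight."""
    return [k for (k,) in db.execute(
        "SELECT keyword FROM matches WHERE job_id = ? ORDER BY keyword",
        (job_id,))]
```

The `status` column is where the No way / Maybe / Yes buttons would record my verdict, so a posting I've already rejected never has to be re-read.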

There would be room here for an eager CS student to apply some AI learning techniques to each listing to help predict which ones I'll be most interested in seeing.
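Even without a CS student on hand, a crude version of that prediction is not much code: once the Yes/No verdicts accumulate, a naive-Bayes-style scorer over title words could rank new postings. A toy sketch (everything here is hypothetical, and real listings would want the full description text, not just titles):

```python
import math
import re
from collections import Counter

def tokens(text):
    """Lowercase word tokens from a job title."""
    return re.findall(r"[a-z]+", text.lower())

class TitleScorer:
    """Rank titles by how much their words resemble the 'yes' pile
    versus the 'no' pile, using add-one-smoothed log-odds."""

    def __init__(self):
        self.counts = {"yes": Counter(), "no": Counter()}

    def train(self, title, label):
        self.counts[label].update(tokens(title))

    def score(self, title):
        yes, no = self.counts["yes"], self.counts["no"]
        y_total = sum(yes.values()) + 1
        n_total = sum(no.values()) + 1
        # Higher score = more like titles previously marked Yes.
        return sum(
            math.log((yes[w] + 1) / y_total) - math.log((no[w] + 1) / n_total)
            for w in tokens(title)
        )
```

Sorting each day's new listings by this score would float the likely-interesting ones to the top of the review queue.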

And, who knows, perhaps this project, or my writing about it, will itself prove beneficial to my career.
