Web scraping
Police data consists of public records about the police system, which can include sources from police, courts, and jails.
Scraping can turn cumbersome records into useful data. When someone wants to use records but they're in a difficult format, scraping is often the answer.
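For example, a minimal scraper might look like the sketch below: fetch one page, pull the rows out of an HTML table, and write a CSV. The URL and column names are hypothetical placeholders, not a real source.

```python
# A minimal sketch of the kind of scraper this page describes: fetch a page,
# pull rows out of an HTML table, and write them to a CSV you can actually use.
# The URL and column layout are hypothetical -- adapt them to your source.
import csv

import requests
from bs4 import BeautifulSoup

URL = "https://example-county.gov/police/daily-blotter"  # hypothetical source

response = requests.get(URL, timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
rows = []
for tr in soup.select("table tr")[1:]:  # skip the header row
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if cells:
        rows.append(cells)

with open("blotter.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "incident", "location"])  # assumed column order
    writer.writerows(rows)

print(f"Wrote {len(rows)} rows to blotter.csv")
```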
Typically, it's only worth writing a scraper if you have a use case for the data already, and you can't easily download what you need. What question are you trying to answer?
Our target users are the thousands of people already using police data. We can support their work by connecting the community of PDAP volunteer scrapers with real, impactful projects. If you don't have your own ideas about what to scrape, check our open requests to see if any catch your interest. You can also find local groups working on the criminal legal system.
After hundreds of hours of user research, we've determined that this is how we can add value in the police data landscape:
Track independently scraped data in our database. Prevent duplication of effort by showing people what's already out there. To submit data you've scraped, get in touch with us.
Connect people with web scraping skills to community members trying to make better use of police data without technical expertise.
Build open-source tools in our shared scrapers repo to make running a scraper on demand easier for people who don't know what "CLI" means.
Scrape data sources and agency metadata for our database. Especially important are Data Sources with a record type of "List of Data Sources."
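To illustrate, here's a rough sketch of finding those index-style sources, assuming you have a CSV export of Data Sources on hand; the file name and column names are assumptions for illustration, not an established schema.

```python
# Hypothetical sketch: given a CSV export of Data Sources (columns assumed here),
# pull out the ones with record type "List of Data Sources" -- index pages that
# point to many more sources and are especially worth scraping.
import csv

with open("data_sources_export.csv", newline="") as f:  # assumed export file
    sources = list(csv.DictReader(f))

lists_of_sources = [
    s for s in sources
    if s.get("record_type") == "List of Data Sources"  # assumed column name
]

for s in lists_of_sources:
    print(s.get("name"), s.get("source_url"))
```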
We're still in the iteration and case study phase. If you want to learn something about the police, you can write a scraper to parse, normalize, or get deeper information from our Data Sources (a light normalization pass is sketched after these steps):
1. Run a scraper you wrote, or one from our shared repo, to get an extraction.
2. Share your extraction and what you learned in Discord.
3. We'll all learn about the criminal legal system from the experience, and brainstorm ways our tools could better facilitate your work.
4. Repeat!
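Here's one way a light normalization pass might look before you share an extraction; the file names, column mapping, and date formats are illustrative assumptions, not a required format.

```python
# Sketch of a light normalization pass before sharing an extraction: standardize
# column names and dates so the next person can read it without guessing.
import csv
from datetime import datetime

# Assumed mapping from the source's column names to simple, shareable ones.
COLUMN_MAP = {"Incident Date": "date", "Offense": "incident", "Block Address": "location"}

def normalize_date(raw: str) -> str:
    """Try a couple of common US formats; return ISO 8601 or the raw value."""
    for fmt in ("%m/%d/%Y", "%B %d, %Y"):
        try:
            return datetime.strptime(raw, fmt).date().isoformat()
        except ValueError:
            continue
    return raw

with open("extraction.csv", newline="") as f:  # assumed raw extraction
    rows = list(csv.DictReader(f))

normalized = []
for row in rows:
    clean = {COLUMN_MAP.get(k, k): (v or "").strip() for k, v in row.items()}
    if "date" in clean:
        clean["date"] = normalize_date(clean["date"])
    normalized.append(clean)

if normalized:
    with open("extraction_normalized.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(normalized[0].keys()))
        writer.writeheader()
        writer.writerows(normalized)
```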
Data is most often useful in its own context, and scraped data is usually small enough to fit on free-tier hosting. After you publish a dataset, we can list it in our database!
It's not an immediate priority to build a big database that stores scraped data in a normalized format. It's almost everyone's first thought when they hear about our project (ours too), but comparing and combining data is its own research project. Our research tells us that access, organization, sharing, technical skills, and communication are the bottlenecks for people using the data.
Aggregation is incredibly complex, and involves more than just mapping properties. So much context is needed before data from two departments can be compared.
Publishing and vouching for extracted data, and documenting its provenance so it can be audited, is a big project. We only want to undertake this work for data we know will be useful.
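As a sketch of what documenting provenance could look like, the snippet below writes a small metadata file alongside an extraction; the fields, paths, and URLs are assumptions for illustration, not an established PDAP format.

```python
# Sketch of a provenance sidecar for an extraction: record where the data came
# from, when, and how it was produced, so someone else can audit or reproduce it.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

data_file = Path("extraction_normalized.csv")  # assumed published dataset

provenance = {
    "source_url": "https://example-county.gov/police/daily-blotter",  # hypothetical
    "scraped_at": datetime.now(timezone.utc).isoformat(),
    "scraper": "https://github.com/your-username/blotter-scraper",  # hypothetical repo
    "sha256": hashlib.sha256(data_file.read_bytes()).hexdigest(),
    "notes": "Single page, HTML table; no records were filtered or altered.",
}

data_file.with_suffix(".provenance.json").write_text(json.dumps(provenance, indent=2))
```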
It's not an immediate priority to automate running all the scrapers in our shared repo. The main reason: this is not what our users are asking us for. We plan to archive the sources and facilitate sharing of scraper code. If we have a stable archive, scraping can be done on demand.
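One way to support that "archive first, scrape on demand" idea is to save a timestamped raw snapshot before parsing anything, as in this sketch; the URL and paths are placeholders.

```python
# Sketch of "archive first, scrape on demand": save a timestamped raw snapshot of
# the page, then parse from the archived copy instead of hitting the site again.
from datetime import datetime, timezone
from pathlib import Path

import requests

URL = "https://example-county.gov/police/daily-blotter"  # hypothetical source
archive_dir = Path("archive")
archive_dir.mkdir(exist_ok=True)

response = requests.get(URL, timeout=30)
response.raise_for_status()

stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
snapshot = archive_dir / f"daily-blotter-{stamp}.html"
snapshot.write_text(response.text)

# Later (or on someone else's machine), a scraper can parse snapshot.read_text()
# without re-fetching the page.
print(f"Archived {URL} to {snapshot}")
```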
Scraping is hard work, and there are hundreds of thousands of potential data sources out there. For many applications, data doesn't even need to be processed to be useful; it just needs to be findable. We don't need to scrape things unless it's clearly adding value.
If you don't have scraping skills, you can ask in Discord to find someone who may be able to help.