Our approach to scraping
Public records about the criminal legal system: police, courts, and jails.
Typically, it's only worth writing a scraper if you have a use case for the data already, and you can't easily download what you need. What question are you trying to answer?
Our target users are the thousands of people already using police data. We can support their work by connecting the community of PDAP volunteer scrapers with real, impactful projects. If you don't have your own ideas about what to scrape, you can find local groups working on the criminal legal system. They probably have data woes!
After hundreds of hours of user research, we have determined that this is how we can best add value in the police data landscape.
We're still in the iteration and case study phase. If you want to learn something about the police:

1. Write a scraper to parse, normalize, or get deeper information from our Data Sources.
2. Share your extraction and what you learned in Discord.
3. We'll all learn about the criminal legal system from the experience, and brainstorm ways our tools could better facilitate your work.
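If you're not sure what step 1 looks like in practice, here is a minimal sketch. It assumes a hypothetical source that publishes an HTML table of arrest records; the URL, libraries, and column handling are illustrative only, not a PDAP standard, so adapt them to whatever the real Data Source publishes.

```python
# Minimal scraper sketch: fetch one page, normalize table rows, write a CSV.
# The URL and table layout are hypothetical -- check the real Data Source first.
import csv

import requests
from bs4 import BeautifulSoup

SOURCE_URL = "https://example-county.gov/police/arrest-log"  # hypothetical


def scrape_arrest_log(url: str) -> list[dict]:
    """Fetch the page and turn each table row into a flat dict keyed by header."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    records = []
    table = soup.find("table")
    if table is None:
        return records
    headers = [th.get_text(strip=True).lower() for th in table.find_all("th")]
    for row in table.find_all("tr")[1:]:
        cells = [td.get_text(strip=True) for td in row.find_all("td")]
        if len(cells) == len(headers):
            records.append(dict(zip(headers, cells)))
    return records


if __name__ == "__main__":
    rows = scrape_arrest_log(SOURCE_URL)
    with open("arrest_log.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys() if rows else [])
        writer.writeheader()
        writer.writerows(rows)
```

Writing the output to a flat CSV keeps it easy to share in Discord alongside your notes about what you learned.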
- In our experience, if you can find someone interested in using the data, storage typically takes care of itself.
- It's not an immediate priority to build a big database that stores all scraped data in the same format. The main reason: this is not what our users are asking us for. It's almost everyone's first thought when they hear about our project (ours too), but our research tells us that access, organization, and communication are the real bottlenecks for people using the data.
- Aggregation is incredibly complex, and involves more than just mapping properties. So much context is needed before data from two departments can be compared.
- Publishing and vouching for extracted data, and documenting its provenance so it can be audited, is a big project. We only want to undertake this work for data we know will be useful.
- It's not an immediate priority to automate running all the scrapers in our shared repo. The main reason: this is not what our users are asking us for. We plan to archive the sources and facilitate sharing of scraper code. With a stable archive, scraping can be done on demand (see the sketch after this list).
- Scraping is hard work, and there are hundreds of thousands of potential data sources out there. For many applications, data doesn't even need to be processed to be useful—it just needs to be findable. We don't need to scrape things unless it's clearly adding value.
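To make the on-demand idea concrete, here is a rough sketch of scraping against a stable archive. It uses the Internet Archive's Wayback Machine availability API purely as an illustration; PDAP's own archive, and the example URL, are assumptions rather than a finished design.

```python
# Sketch of on-demand scraping from an archive: look up the most recent
# archived snapshot of a source, then parse that snapshot when needed.
# Uses the Wayback Machine availability API for illustration only.
import requests


def latest_snapshot(source_url: str) -> str | None:
    """Return the URL of the most recent archived copy of a source, if any."""
    api = "https://archive.org/wayback/available"
    data = requests.get(api, params={"url": source_url}, timeout=30).json()
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None


snapshot = latest_snapshot("https://example-county.gov/police/arrest-log")  # hypothetical source
if snapshot:
    html = requests.get(snapshot, timeout=30).text  # parse this on demand
```

The point is the workflow, not the specific API: if the source is archived reliably, the parsing step can happen whenever someone actually needs the data.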