July 27 Dolt Bounty retro

What went well?

    engaged & re-engaged with the community + potential supporters
    iterated on our schema / planned and made some changes
    we filled out the agencies table to near-completion → a prerequisite for the datasets table
    excellent support from the Dolt team!
    learned that volunteer interest plays a big role in determining our focus
    got a start on refining the review process
    we split the bounty into two pieces, giving ourselves an opportunity to learn between them

What could have gone better?

    there's currently no automated review process or script (see the sketch after this list)
    the difference between a dataset URL and an agency homepage wasn't clear enough
      people ended up focusing on the agency homepage URL more than the datasets
    one person ran away with the scoreboard by submitting machine-readable lat/long data
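
Since the review script doesn't exist yet, here is a minimal sketch of what an automated first pass could look like, assuming submissions can be exported as a CSV. The column names (agency_name, source_url, lat, lng) are hypothetical placeholders for whatever the real bounty schema requires.

```python
# Sketch of an automated first-pass review for bounty submissions.
# Column names below are hypothetical; the real bounty schema would
# define the actual required fields.
import csv
import sys
from urllib.parse import urlparse

REQUIRED = ["agency_name", "source_url", "lat", "lng"]

def problems(row: dict) -> list[str]:
    """Return human-readable issues found in one submission row."""
    issues = []
    for field in REQUIRED:
        if not (row.get(field) or "").strip():
            issues.append(f"missing {field}")
    parsed = urlparse(row.get("source_url") or "")
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        issues.append("source_url is not a valid http(s) URL")
    try:
        lat, lng = float(row["lat"]), float(row["lng"])
        if not (-90 <= lat <= 90 and -180 <= lng <= 180):
            issues.append("lat/lng out of range")
    except (KeyError, TypeError, ValueError):
        issues.append("lat/lng are not machine-readable numbers")
    return issues

def main(path: str) -> None:
    with open(path, newline="") as f:
        # Header row is line 1, so data rows start at line 2.
        for line_no, row in enumerate(csv.DictReader(f), start=2):
            for issue in problems(row):
                print(f"line {line_no}: {issue}")

if __name__ == "__main__":
    main(sys.argv[1])
```

Running it against an export (e.g. python review.py submissions.csv) prints a line-by-line issue list before any human review.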

Next steps

    Katie to share some metrics on engagement
    Minimize the variables + types of submissions allowed, particularly the differences between scraped and manually gathered data
    Plan next bounty for ~8/25 start
      Formalize that our team owns the reviews + scripts
      Smaller bounties help us iterate on a faster loop
      Look at the attribution + scoreboard code and contribute improvements to how rewards are given out
      Announce $50/participant (pending Katie + Dolt approval)

Notes for bounty success

    Pick a common schema for data (e.g. "incident reports")
    Allow people to submit data that links back to the source (i.e. the dataset)
    Focus on discrete, scrapeable goals with relevant topics
      e.g. the hospital bounty was in response to a newly passed law requiring hospitals to publish data
      a national scope with a narrow data focus is a potential alternative to a narrow geographic focus
        helps with schema normalization + big stories + big moves
        a local focus drives the storytelling → action loop
    Focus on additions vs. edits, enforceable by requiring certain properties on each submission (see the sketch after this list)
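
One way to make the additions-only rule checkable: compare each submitted key against the keys already in the table. This is a minimal sketch assuming both sides can be exported as CSVs; the key column source_url and the file names are hypothetical stand-ins for the real schema.

```python
# Sketch: enforce "additions, not edits" by rejecting any submitted row
# whose key already exists in the table. The key column and file names
# are hypothetical stand-ins for the table's real primary key.
import csv

def rejected_edits(existing_csv: str, submitted_csv: str,
                   key: str = "source_url") -> list[str]:
    """Return keys of submitted rows that would edit existing rows."""
    with open(existing_csv, newline="") as f:
        existing_keys = {row[key] for row in csv.DictReader(f)}
    with open(submitted_csv, newline="") as f:
        return [row[key] for row in csv.DictReader(f)
                if row[key] in existing_keys]

if __name__ == "__main__":
    for dup in rejected_edits("datasets.csv", "submission.csv"):
        print(f"rejected (row already exists): {dup}")
```

The same check could fold into the review script above, so edits are rejected before a human reviewer ever looks at them.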