diff --git a/meeting-notes/2019/Q2/2019-04-17--ipns.md b/meeting-notes/2019/Q2/2019-04-17--ipns.md
new file mode 100644
index 00000000..72a4bce4
--- /dev/null
+++ b/meeting-notes/2019/Q2/2019-04-17--ipns.md
@@ -0,0 +1,106 @@
+# Semi-Centralized IPNS Status April 17, 2019
+
+- **Lead:** @aschmahmann
+- **Notetaker:**
+  - @momack2
+- **Attendees:**
+  - @aschmahmann
+  - @satazor
+  - @vasco-santos
+  - @hugomrdias
+  - @jimpick
+  - @lidel
+  - @momack2
+- [**Zoom.us meeting URL**](https://protocol.zoom.us/j/868858751)
+- [**Recording**](https://drive.google.com/open?id=1SFrWN_PJsTZYe_VkKhUHxCMa-4fzL2tW)
+
+## Agenda
+
+1. Start recording
+2. Ask everyone to put their name into the list of attendees
+3. Ask for a volunteer to take notes
+4. Everyone can add items to this agenda for things they would like to discuss
+5. Collect a list of current proposals/ongoing work (IPFS DNSLink, Cloudflare Workers, IPNS-over-DNS, IPNS-over-PubSub, ...)
+6. What happens when we have multiple IPNS networks? (i.e. we need to specify the ipns:// protocol)
+7. Republishing - what can we do?
+
+## Notes
+
+- Hugo
+  - investigating a short-term solution for IPFS Camp where a fast IPNS would support fast IPFS in web browsers and also the package manager work
+  - doing proofs of concept using Cloudflare Workers
+  - it's distributed because it runs on the edge, but not fully decentralized
+  - did it in 1-2 days, easy to do
+  - ~150ms round-trip time for IPNS resolution
+  - works like an API: the workers take requests and either put a record into a key-value store or get a record from there
+  - records are validated on put and self-certified by the client on get
+  - not much trust needs to be placed in the code running in the workers
+  - biggest problem: if the workers are blacklisted/removed by Cloudflare, it's hard to replicate the database onto other servers (things like HTTP addresses would need to change)
+  - think this can get down to 100ms globally
+  - the other option is to pick up IPNS-over-DNS
+  - https://github.com/ipfs/RFC/compare/rfc/5-ipns-over-dns
+  - think it is more likely to be accepted by the community in the short term
+  - RFC made by Lars: when you update the DNSLink record on your domain, you put an IPNS record with a domain instead of with a hash
+  - the "resolve" part is mostly done - it resolves recursively until it finds the IPFS path (see the sketch after this section)
+  - already working in Go, still a little more JS work to do
+  - publishing is a little harder: you need an authoritative server for some domain that is attributed to IPNS - then people route their IPNS path through that domain until it resolves to an IPFS path
+  - could tell people to deploy this on their own servers and configure name servers to point at them
+  - with DNS we already have global infrastructure to rely on, so when it works, it's fast
+  - "how does DNS figure out which servers are the authoritative ones?"
+  - IPFS provides one resolver (e.g. dweb.link), and if people use hash.dweb.link we know it points to the desired thing
+  - only have to do this once even when there are multiple versions - just point at hash.dweb.link
+  - the resolver will always listen on a well-known pubsub channel, and when you publish you can go to that pubsub channel to publish updates
+  - the resolver saves that
+  - won't necessarily need a domain for things like package managers because they can use an IPNS path directly (websites need to have a domain)
+  - important to be able to update directly via IPNS instead of constantly updating the DNS record (risky and doesn't allow things like GitHub previews)
+  - DNS is fast to update, but the problem is TTLs, caches, and routers
+  - when the request bubbles up through the DNS record, you know the authoritative domain is the one in the record
+  - republishing is the biggest problem
+  - could default to "dweb.link" when not explicitly specified
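+
+To make the recursive "resolve" step above concrete, here is a minimal sketch in Go. It assumes the DNSLink convention of `dnslink=...` TXT records under `_dnslink.<domain>` and follows `/ipns/<domain>` references until an `/ipfs/` path is reached; the exact record layout in the IPNS-over-DNS RFC may differ, and the domain in `main` is only an example.
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+	"net"
+	"strings"
+)
+
+// resolveDNSLink looks up the _dnslink TXT record for a domain and follows
+// /ipns/<domain> references recursively until it reaches an /ipfs/ path.
+func resolveDNSLink(domain string, maxDepth int) (string, error) {
+	for depth := 0; depth < maxDepth; depth++ {
+		txts, err := net.LookupTXT("_dnslink." + domain)
+		if err != nil {
+			return "", err
+		}
+		var link string
+		for _, txt := range txts {
+			if strings.HasPrefix(txt, "dnslink=") {
+				link = strings.TrimPrefix(txt, "dnslink=")
+				break
+			}
+		}
+		switch {
+		case link == "":
+			return "", fmt.Errorf("no dnslink record found for %s", domain)
+		case strings.HasPrefix(link, "/ipfs/"):
+			return link, nil // reached immutable content, done
+		case strings.HasPrefix(link, "/ipns/"):
+			domain = strings.TrimPrefix(link, "/ipns/") // follow to the next name
+		default:
+			return "", fmt.Errorf("unrecognized dnslink value: %s", link)
+		}
+	}
+	return "", errors.New("dnslink resolution exceeded max depth")
+}
+
+func main() {
+	path, err := resolveDNSLink("docs.ipfs.io", 8)
+	fmt.Println(path, err)
+}
+```
+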
+- Adin
+  - IPNS over pubsub - the current issue is that it doesn't resolve on the initial hit - you have to hit the DHT for the initial resolve
+  - now we want to ask peers directly, and they echo the record out every now and then
+  - working reasonably well, still needs some IPFS plumbing
+  - relative to DNS it is going to be a little slower, because you have to do a DHT query to find the people interested in your address
+  - faster than the DHT alone because you don't need a quorum, just one peer on the pubsub channel
+  - can be mitigated by things like rendezvous
+  - rendezvous instead stores a name-to-peer-ID map and pushes you all the peers you're looking for
+  - need to hit rendezvous and then the pubsub channel
+  - rendezvous is not quite done yet, but the non-federated version seems to be closer
+  - things run in a couple of seconds, but the DHT adds a level of randomness for benchmarking (rendezvous)
+  - think adversarial nodes can be worked around if there are enough peers
+  - seems like this might create more overhead, with many, many pubsub channels for every website you visit
+  - some folks use pre-configured pubsub as a performance improvement over a centralized service (could do the subscription after the first central lookup)
+- right now we have IPNS over the DHT and IPNS over pubsub
+  - pubsub records are currently both published to the DHT and pulled from there initially
+  - when we add more ways to receive a record, we change what it means to know where to query by default
+  - want to be able to experiment, and standardizing on a single prototype will constrain that
+  - the DNS service option centralizes load on our gateways (which are already under strain)
+  - being the central authority for IPNS takedowns is a downside - if the public gateway is blocked, then the centralized path is broken
+  - it dramatically simplifies the infrastructure needed to NOT hit our gateway (easy to point the record at your own server - not in the DHT)
+- what's the plan for something like ipld:// - how would you resolve that?
+  - thinking about it as an internal pathing scheme, not how you grab/store the blocks
+  - for browser integration, the browser would need to have that configured
+  - would prefer to leverage something that was already there
+- want to specify the ipns protocol as we add more things
+  - would like to always have the DHT as a fallback (last resort)
+  - maybe apart from the DHT you can pick whatever makes sense for your use case? (see the sketch after this list)
+  - continue the discussion in issues
+  - the order should be configurable - a fixed order won't fit everyone's use cases (in the short term, DNS is always faster)
+  - what do you do about failures/inconsistencies between updates?
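+
+The configurable-order idea above might look roughly like the sketch below. It is a hedged illustration, not an existing API: the `Resolver` interface, `resolveInOrder`, and the fake resolvers are all made up for this example; a real implementation would plug in actual DNS, pubsub, and DHT resolvers, with the DHT kept last as the fallback.
+
+```go
+package main
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"time"
+)
+
+// Resolver is a hypothetical interface: each transport (DNS, pubsub, DHT, ...)
+// knows how to turn an /ipns/ name into an /ipfs/ path.
+type Resolver interface {
+	Name() string
+	Resolve(ctx context.Context, name string) (string, error)
+}
+
+// resolveInOrder tries each resolver in the user-configured order and returns
+// the first answer; keeping the DHT last makes it the always-available fallback.
+func resolveInOrder(ctx context.Context, name string, order []Resolver) (string, error) {
+	var lastErr error = errors.New("no resolvers configured")
+	for _, r := range order {
+		rctx, cancel := context.WithTimeout(ctx, 10*time.Second)
+		path, err := r.Resolve(rctx, name)
+		cancel()
+		if err == nil {
+			return path, nil
+		}
+		lastErr = fmt.Errorf("%s: %w", r.Name(), err)
+	}
+	return "", lastErr
+}
+
+// fakeResolver stands in for real DNS/pubsub/DHT resolvers in this sketch.
+type fakeResolver struct {
+	name string
+	path string
+	err  error
+}
+
+func (f fakeResolver) Name() string { return f.name }
+func (f fakeResolver) Resolve(context.Context, string) (string, error) {
+	return f.path, f.err
+}
+
+func main() {
+	order := []Resolver{
+		fakeResolver{name: "dns", err: errors.New("no TXT record")},
+		fakeResolver{name: "pubsub", err: errors.New("no peers on the channel")},
+		fakeResolver{name: "dht", path: "/ipfs/<cid>"},
+	}
+	fmt.Println(resolveInOrder(context.Background(), "/ipns/example.com", order))
+}
+```
+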
+- Republishing
+  - it's unreasonable for people to rerun IPNS publish every day just to keep a website published
+  - would a long TTL work? or third-party republishing? (see the sketch at the end of these notes)
+  - should be able to keep a website alive even if you can't publish a new version; should be able to keep the last version alive forever if people care - if you disappear, the link to your website shouldn't go stale
+  - keeping an old signed record alive should work
+  - if people publish content with a low TTL, other people who want it maintained will just republish it as a new record with a longer TTL - so a low TTL doesn't prevent this, it just makes it more annoying for the users who want it
+  - in gossipsub you can keep republishing, but in the DHT you can't
+  - problem with setting the TTL to infinity: many records in the DHT, and if you're trying to prove freshness you don't want the old record also floating around
+  - would it make sense for different implementations to have different end-of-life policies?
+  - currently the DHT won't actually support an infinite TTL (it drops records after 24 hours and doesn't allow republishing)
+  - there's a desire for freshness, but maybe we should allow republishing while within the TTL
+  - for DNS, an infinite TTL would be less of an issue
+  - maybe the DHT would just drop them?
+
+AIs:
+
+- Adin: will start an issue in the specs repo discussing this - if needed, will set up another meeting
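+
+A rough sketch of the third-party republishing idea from the discussion above, assuming a hypothetical `PutValue`-style routing interface. It only re-announces an existing signed record (no private key involved), so anyone who cares about keeping the name alive could run it, and it only helps while the record's own lifetime has not expired; the key layout and interval are illustrative.
+
+```go
+package main
+
+import (
+	"context"
+	"log"
+	"time"
+)
+
+// putter is a minimal stand-in for the routing layer (e.g. the DHT): all the
+// republisher needs is the ability to re-put a record under its IPNS key.
+type putter interface {
+	PutValue(ctx context.Context, key string, value []byte) error
+}
+
+// keepAlive re-announces an already-signed IPNS record at a fixed interval so
+// that peers which drop records (e.g. after ~24h) keep seeing a fresh copy.
+// The record is never re-signed, only re-put.
+func keepAlive(ctx context.Context, r putter, key string, signedRecord []byte, every time.Duration) {
+	ticker := time.NewTicker(every)
+	defer ticker.Stop()
+	for {
+		if err := r.PutValue(ctx, key, signedRecord); err != nil {
+			log.Printf("republish of %s failed: %v", key, err)
+		}
+		select {
+		case <-ctx.Done():
+			return
+		case <-ticker.C:
+		}
+	}
+}
+
+// logPutter just logs puts; a real deployment would pass the node's DHT here.
+type logPutter struct{}
+
+func (logPutter) PutValue(_ context.Context, key string, _ []byte) error {
+	log.Printf("re-announced record for %s", key)
+	return nil
+}
+
+func main() {
+	// Short demo values; a real republisher would use the record's actual key
+	// (derived from the publisher's peer ID) and an interval well under 24h.
+	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
+	defer cancel()
+	keepAlive(ctx, logPutter{}, "/ipns/<peer-id>", []byte("<signed-record-bytes>"), 15*time.Second)
+}
+```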