Speaker 0 00:00:04 Welcome to The Rubric. I'm your host, Joe Andrieu. I'm Erica Connell. And I'm Eric Schuh.
Speaker 1 00:00:12 The Rubric is here to help you understand the technologies behind decentralized identity, including decentralized identifiers, also known as DIDs, DID documents, and DID methods. Decentralized identity is a way to build robust identity-based services without dependence on a trusted third party. Instead of being tied to a central service like Facebook or Google or the Department of Motor Vehicles, DIDs can be created by anyone, anywhere, and be used for any purpose. DIDs have associated DID methods, which define how you turn a DID into a DID document, as well as how you create, update, and deactivate DIDs and their documents. DID documents provide cryptographic material that enables secure interactions with whoever controls the DID. Different DID methods use different underlying mechanisms, with different performance, security, and privacy trade-offs.
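For listeners who like to see the shape of things, the identifier structure described here can be sketched in a few lines of Python. The `parse_did` helper and the example identifier's pieces are ours for illustration; the three-part `did:method:identifier` layout itself comes from the DID specification.

```python
# A DID is a three-part string: the fixed scheme "did", a method name,
# and a method-specific identifier, separated by colons.
def parse_did(did):
    """Split a DID into its method name and method-specific identifier."""
    scheme, method, method_specific_id = did.split(":", 2)
    if scheme != "did":
        raise ValueError("not a DID: %r" % did)
    return method, method_specific_id

# "did:example:123456789abcdefghi" is the placeholder DID used in the spec.
method, msid = parse_did("did:example:123456789abcdefghi")
print(method, msid)  # example 123456789abcdefghi
```

The method name is what tells software which rules to apply when working with the rest of the identifier.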
Speaker 2 00:01:03 This show, The Rubric, reviews different DID methods using a common set of criteria, comparing apples to apples, so you can make better decisions about which DID method is appropriate for your needs.
Speaker 1 00:01:15 Today we'll be running a test of our new podcast system. This is a test. Repeat: this is only a test. If this were an actual podcast, this alert would be followed by actual content. And now, these announcements.
Speaker 0 00:01:33 The Rubric starts today. This is our premiere episode.
Speaker 1 00:01:36 We anticipate publishing it Friday, April 23rd. Thanks for joining us for the introduction of The Rubric. On our next episode of The Rubric, we'll be talking with the W3C's DID Working Group chairs, Dan Burnett and Brent Zundel. Look for that about two weeks after this one. Finally, if you'd like to share something with the community, or you have your own announcement that our listeners might be interested in, send us an email at [email protected].
That's [email protected].
Our guests on the show today are us. Let me begin by introducing our host, Joe Andrieu. He is the founder and president of Legendary Requirements and a long-time contributor to the user-centric and decentralized identity movements. Joe is the former co-chair of both the World Wide Web Consortium's Credentials Community Group and the Kantara Initiative's Information Sharing Work Group. He is the treasurer, board member, and producer of Rebooting the Web of Trust, and an invited expert in the W3C's Decentralized Identifier and Verifiable Credentials Working Groups, where he is a co-editor of both groups' use case documents, most relevant to our work here.
Speaker 1 00:02:56 He is also the lead author of the Rubric for the Decentralization of DID Methods and co-editor of the DID Method Rubric under development at the World Wide Web Consortium. So, Joe, let me ask you: how did you get into this space? It was through the Internet Identity Workshop that I got invited to attend the ID2020 Summit, discussing how decentralized identity might be able to address some of the world's largest problems. And immediately following that, I attended my first Rebooting the Web of Trust, which was the second event there. And it was the engagement with that community and those individuals that got me involved with the <inaudible> work in verifiable credentials, and then ultimately in decentralized identifiers.
Speaker 2 00:03:40 Next I have Eric Schuh, a relatively new member of the Legendary Requirements team. He started his career as a professional software developer and now works with Legendary to help other developers better understand and document the requirements for their software. Welcome, Eric. Why don't you tell us a little bit about how you got involved with Legendary Requirements and the identity space?
Speaker 1 00:04:03 Yeah, so my background: coming out of college in 2014, I graduated with a degree in applied and engineering physics from Cornell, and right out of college found a job working at FLIR commercial systems, where I did embedded software development, running the gamut from manufacturing tests all the way to developing video trackers for infrared cameras, which is where I met Joe and worked with him for about five years. Right before COVID happened, I left FLIR, and Joe informed me of Legendary Requirements, and I decided to give it a shot. So I've been in the space, as Joe mentioned, for a fairly short amount of time, about nine months or so, so I'm still getting my feet wet, getting into all the technologies, but it's been extremely interesting so far. It has been great having you on the team.
Speaker 1 00:05:00 Eric, I'm particularly excited that on this podcast you're bringing the voice of someone who's technically sophisticated but maybe new to DIDs. On the other side of the table, I'm excited to have Erica Connell. She is a trained director and actor. She runs her own educational startup, Wonderland Stage and Screen, where she teaches theater and film to students. She is also a member of the W3C Credentials Community Group, a volunteer with Rebooting the Web of Trust, and she works with Legendary Requirements, bringing an artist's and an outsider's perspective to our work. She's also producer of this podcast. So how about you, Erica? How'd you get involved?
Speaker 2 00:05:40 Well, as you know, Joe, shortly after we met, you invited me to participate as an event coordinator for the Rebooting the Web of Trust event in Santa Barbara. I was really drawn in by the work being done there, not just the content of the work, which I think is really critical for society moving forward, but the collaboration of the participants. People from all over the industry working together was really interesting and encouraging to me. Watching a diverse group striving for common goals and creating a shared language around identity, I was really inspired and wanted to get more involved.
Speaker 1 00:06:18 Thank you, Erica. It has been great having you on the legendary team, and I'm excited about having your voice on this podcast.
Speaker 2 00:06:25 Thank you, Joe. I am having a lot of fun on this podcast so far. Tell us why are you so passionate about decentralized identity?
Speaker 1 00:06:34 I'm passionate about decentralized identity because identity itself, as a psychological and a social phenomenon, is already decentralized. We've had names as long as we've had writing, probably far longer than that, but our oldest evidence of written language has names in it. And when we started building bureaucratic information systems, we started centralizing identity. So we have Social Security numbers from the Social Security Administration, we get driver's licenses from the DMV, we have credit card numbers that are issued by credit card companies. That's a natural form of centralization, just because the people who built those systems needed a way to handle identity. But the core of identity predates all of that. We didn't need a central authority to figure out who my mom is or what pet name to call my girlfriend. Our core notion of identity predates all of this technology, and it was fundamentally decentralized. So I'd like to help us return to that.
Speaker 2 00:07:38 And what does it matter if something is centralized or decentralized?
Speaker 1 00:07:43 It's about power. In the online world, centralization ended up as our default, because we go to websites, we go to servers, we interact with these centers that have to keep track of identity for their own purposes. For me to send a message to someone on Twitter, Twitter needs to have a notion of who I'm sending it to, and that notion doesn't have to be what's on Facebook; Facebook has its own notion. In that model, Twitter is in control of my identity on Twitter and your identity on Twitter, and Facebook is in charge of its identity over there. But in the offline world, we're decentralized. Where we go, what we do, who we talk to: it's up to us. There isn't a central authority who tells us where we can shop, what currency we can use, who we can talk to.
Speaker 1 00:08:35 A free society demands that we are free to live our lives as we want to, as long as it's not illegal, causing harm to others, et cetera, which is part of the political conversation. But the technology conversation doesn't need this baked-in centrality that we happened to have gotten because of how the internet was built. So, given that as a foundation or groundwork for how we're thinking about identity going forward, how do DIDs then help us decentralize? DIDs provide a way for individuals to provide their own identity verification when interacting with someone, without needing to rely on a single centralized authority. Right now, if I log in with Facebook, I'm dependent on Facebook acknowledging and recognizing that I am who I am. With a DID, if I log in or authenticate with a DID, I'm using cryptography to prove that I am the controller of that identifier.
Speaker 1 00:09:38 I am the person who is associated with this identifier, and now the website can treat me as if I'm logged in. And we can apply this architecture not just to logging into a service, but also to credentials that could be issued by an authority and used somewhere else. For example, I could use a DID to get a verifiable credential, having proved that I control the DID, and then present that credential to someone else. So I could go to the DMV, get a driver's license that's issued to my DID, and then at another point in time, when I present that verifiable credential to someone, I can prove, because I control that DID, that I'm most likely the same person that credential was issued to. So in this way, DIDs combined with VCs allow us to have this sort of identity assurance without relying on another party, such as Facebook or GitHub or what have you.
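The "prove I control the identifier" step described here is a challenge-response. A minimal sketch of that shape follows; note a loud caveat: real DID authentication uses asymmetric signatures verified against public keys published in the DID document, but to keep this runnable with only the Python standard library, an HMAC over a shared secret stands in for the signature. The function names and key material are invented for illustration.

```python
import hmac
import hashlib
import secrets

# Toy stand-in for DID authentication. A real system would sign the
# challenge with a private key and verify it against a public key from
# the controller's DID document; HMAC plays that role here purely so
# the example runs with the standard library alone.
def sign_challenge(controller_secret: bytes, challenge: bytes) -> bytes:
    return hmac.new(controller_secret, challenge, hashlib.sha256).digest()

def verify_challenge(known_secret: bytes, challenge: bytes, proof: bytes) -> bool:
    expected = hmac.new(known_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

# The verifier issues a fresh random challenge; the DID controller
# answers it to demonstrate control of the identifier.
challenge = secrets.token_bytes(32)
secret = b"controller-private-material"  # hypothetical key material
proof = sign_challenge(secret, challenge)
assert verify_challenge(secret, challenge, proof)
assert not verify_challenge(b"someone-else", challenge, proof)
```

The fresh challenge is what prevents replay: a proof captured once cannot be reused against a new challenge.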
Speaker 2 00:10:42 Does themselves decentralize
Speaker 1 00:10:45 Great question. DIDs are decentralized in two key ways. First, each DID specifies its own DID method, right in the identifier itself. There's a method string right at the beginning, so it's did, colon, method. That method string tells users who are using the DID for some reason how to actually do that: how you create the DID, how you read the DID, update it, or deactivate it. All those operations are specified by that method string. So anyone who wants to use DIDs can use any DID method; you aren't required to use a particular approach. And frankly, anyone can create a new method, meaning there's no central authority that decides how DID methods work. Now, most individuals aren't going to come up with their own custom DID method; that's sort of not the approach. But if there's some new innovation or new idea, maybe it's using biometrics, or it's using quantum technologies, or it's using a warp drive or whatever.
Speaker 1 00:11:45 Um, if there's a new way to approach a decentralized identifier, anyone can publish it, create it, and others can start using it. So that's, that's the first way that DIDs are decentralized. The second is that each did method might itself be decentralized. For example, the first, um, sort of batch of did methods that were created and, and DIDs were designed with this in mind, they were all using distributed ledgers. So DLTs distributed ledger technology is, uh, what blockchains are based off of. Um, it's what Bitcoin uses. If there are amuses, et cetera, these DLTs themselves are decentralized. So the fundamental storage mechanism or control mechanism of the method itself might be decentralized in the same way that Bitcoin is. However, the did methods can vary considerably with extremely different characteristics. Um, for example, did ki, uh, has no registry outside the did itself. So that's a radically different sort of approach than what you would have in a ledger based did. And there are other approaches, there are all sorts of ways that any particular did method might choose to decentralize. Okay,
Speaker 2 00:13:02 Awesome. I am really looking forward to the did:warpdrive evaluation. That's going to be a fun one to record.
Speaker 1 00:13:11 It's on my calendar and here
Speaker 2 00:13:13 We are on the rubric podcast, what's a rubric have to do with
Speaker 1 00:13:18 About two years ago, those of us working on making DIDs a reality realized that we didn't have a clean, rigorous definition of decentralized when it comes to DIDs, which is kind of problematic, as decentralized is in the name. Some of us had hoped that a definition of decentralized would allow us to make sure that any given DID method is, in fact, itself decentralized. But as we tried to tease that out and come up with a definition that could be a criterion, we realized the word means very different things to different people, even though we had all come together to advance the standard. So we came up with an approach based on a rubric instead of a definition. A rubric allows evaluators to use a common framework for comparing different subjects. That's what a rubric does: it's a standard way to evaluate a category of subjects against certain criteria. It's used in education,
Speaker 1 00:14:14 For example, to capture what students actually learned relative to a set of goals. So a rubric for a us history class might say something like, can the student name all 50 states, can the student correctly Neva Capitol of all 50 states? So those sorts of criteria, um, we have turned around and applied to, uh, did methods so that multiple people can evaluate multiple methods and have those evaluations be comparable in some way, or did methods. We wanted to support this variability in definitions of some use cases are going to benefit from a particular notion of decentralized where a different set of use cases really care about different aspects of decentralization. So the idea of that did method rubric was it would enable evaluators of did methods, people who were trying to choose one over the other to pick the criteria that is most relevant to them and use those criteria for a consistent evaluation over the methods they want to consider. Our first published rubric called the rubric for the decentralization of dead methods was written by a team of collaborators as part of rebuilding. The web of trust that work was adopted by the dead working group, and then expanded to cover any criteria, not just decentralization, but any criteria that might help someone pick the right did method for their use.
Speaker 1 00:15:45 So then, why a podcast? As we were developing the written rubrics, it became evident that you really need a lot of expertise in both the rubric itself and the method that you're evaluating to do a proper evaluation. We started with six methods, and we really needed to read through the specs and often contact the creators of those methods to understand some of the deeper issues, like: what is their governance structure, who's in charge, who's paying for what, what are the economics? Some of those things aren't in the specification. So there was a huge learning curve, even for experts in this space, in order to apply this rubric. And that's why we created the podcast: because our clients, and our clients' clients, are constantly being called upon to answer the question of which DID method should I use. So we're here to help people figure that out.
Speaker 1 00:16:39 We bring, did method, creators and users onto the show to explore how did methods work what's unique about different did methods and which methods might be suited to particular situations? Our hope is that first, anyone can listen to our podcast and get a smart, but inclusive discussion about a particular did method or particularly use case and come away with a better understanding of how their own business organization or project might be able to use DIDs. Second, our hope is that we can cycle through most, if not all of the current did methods to give an audio introduction that serves the community long-term as an anchor for exploring and understanding specific methods. So my hope is that listeners not only get a library of methods, specific discussion and insight, but also a broader understanding of how the rubric can be applied to any use case, especially their own.
Speaker 2 00:17:33 Okay. And you said anyone can create one. How, how many did methods are there?
Speaker 1 00:17:39 There are currently 92, you or 94 listed two of which have been deprecated. Um, so that's a lot and we, we seem be getting a new one every week or every couple of weeks.
Speaker 2 00:17:52 Yeah, that is a lot. Um, what's so important about did methods.
Speaker 1 00:17:58 I did method defines how you use the did that did itself is just a string of characters. It looks like a URL and a did method tells you how to turn that URL into getting the document, which tells us how we interact with the subject of that identifier. We call that resolution. Each did resolves to one and only did document that did document contains the information for secure interaction. So anyone who has a unique approach to securely manage the cryptographic identifiers that are at the core of the IDs can define and document and publish their own did method. Some of the more well-known methods available today are, did BTCR based on Bitcoin did verus one, or did V1, um, uh, which is a bespoke fit for purpose identity blockchain. And did I, on a recently released Bitcoin based did method from Microsoft
Speaker 2 00:18:57 And who is our intended audience? Is this for end users, developers, regulators, who are you trying to reach?
Speaker 1 00:19:05 I'm trying to reach the dead curious. So anyone who's interested in Dead's, uh, at this stage is likely to have, uh, uh, be a technically minded decision-maker. So, uh, either you're a developer trying to choose what you should do on your own software, or maybe you manage developers, you're a product manager, or you're a CEO, and you're trying to figure out, Hey, should our company get into this stuff? Um, well we hope we can, we can address that entire audience from developers to regulators, to CEOs. People would need to make decisions and understand did methods, but maybe who aren't directly implemented them.
Speaker 2 00:19:39 Okay. Let's switch gears.
Speaker 1 00:19:43 Eric. You're the newcomer yet. You're technically sophisticated. How hard has it been learning how DIDs work and how you might evaluate it did methods. Yeah. So in the, almost a year, since I've been in this space, I would say that, uh, conceptually, uh, did some did methods, I guess the concepts behind them come, at least for me fairly easily. And you can read through the did method specification. And I think it a pretty good idea of what the technology of digits trying to actualize in the world, but, uh, similar to all of the various blockchains that are in existence, the actualization of the specification and of the technology in the real world is extraordinarily complex. And specifically on the topic of evaluating a did method, the distributed ledger technology is almost as much important as there is the did and did method itself. So for instance, did, BTCR built on top of Bitcoin.
Speaker 1 00:20:49 There's going to be a lot of security, privacy, and other affordances that come out of the fact that the method creator chose the Bitcoin ledger as kind of the core of the method. And so to really evaluate, he did method, you need to understand some detail or perhaps a lot of detail, the underlying technology that it is built on top of. Um, so in that regard, uh, there's a lot of reading that is needed to really understand the way the technology is being used. So I would say conceptually, uh, it's been fairly smooth, but getting into how the technology is actually being used has been extraordinarily complex. So the devil is in the details. So Erica, as an artist and mom, how, how are you dealing with these technical details? What, how, how and why is this interesting to you?
Speaker 2 00:21:47 That's a great question. The technical details are in fact technical details to me, I don't understand all of them, but, um, when I was introduced to this community, what hooked me was first watching and then getting to contribute. However, smally to the conversations with the people, building the technology. We all use people creating the standards and influencing the way companies and governments and individuals deal with identity and privacy. And that is very interesting to me. Uh, and the folks in this space doing it in a rigorous, uh, deep, critical thinking kind of way, made me want to find out how to participate in that conversation.
Speaker 1 00:22:36 So do you honestly expect that you and your kids are going to be using DDS? Absolutely.
Speaker 2 00:22:42 For my kids, especially I think that we are marching towards a world for them. That will be very, um, to my mind, was media integrated, internet integrated and having kids in the internet age is really what brought this awareness to me about privacy issues. As a mom, protecting their privacy is really important to me and teaching them what they can do to protect their privacy into their futures is, is a big deal. So as we, as we move into, um, existing in a way that's more and more connected and more online, I think any, any individuals controlling their information is going to be critical.
Speaker 1 00:23:24 Let's wait for the rubric, Joe at a high level. Could you tell us how we use the rubric to evaluate it did method yes. Happy to at its simplest. A rubric is just a structured set of questions. So easiest way to think about it for dead methods. We suggest you pick the sub sets of criteria that are most important to your usage, your use case, and then ask those questions of the dead methods that have got your attention. You're probably not gonna apply to all of them, but you will have heard, you will have asked some folks about, uh, this, that and the other, or you hear a press release or, or seen an announcement. Um, so pick, pick a half a dozen pick one, apply the rubric by asking the questions about those methods.
Speaker 2 00:24:08 So the rubric is just a bunch of questions.
Speaker 1 00:24:11 Well, technically it's a set of criteria and each criteria has a question and a set of possible responses.
Speaker 2 00:24:19 Okay. Let's talk about a specific example.
Speaker 1 00:24:23 Sure. The first criteria and the did method rubric is about open contribution. Meaning how easy is it for someone to participate in deciding how this method works? The question is how open is participation in governance decisions? There were four possible answers, a B, C, and D. Um, a, anyone can participate in an open, fair process where all participants have equal opportunity to be heard and influence decisions. B anyone can comment and contribute to open debate, but decisions are ultimately made by closed group C debate is restricted to a selected, but known group and D debate is conducted in secret by an unknown group. So this is a spectrum of possible responses. A given did method may earn a B plus or a C minus, and these are not necessarily grades. Although I understand people will think about it that way, but on many of these cases, it's just trade-offs. So it may be that a is important to you, or it may be that C is important to you, or maybe that this isn't important at all this particular, uh, criteria, the relevance, and it's documented in the method rubric. The relevance is that governance determines how the rules of the underlying network are set and maintained. And the more parties there are able to contribute to governance debates, the more decentralized the governance. So the more you open up the governance, the more decentralized the system is. And that's our first criteria in the did method rubric.
Speaker 2 00:25:59 Okay. So you have, uh, this set of criteria that you're using you, um, answer all those questions and then what,
Speaker 1 00:26:09 Then you Colet and combine, and you get a coherent report card that for each criteria shows you how each of the different methods scored. And if one or the other is doing something exceptional or different, it's going to pop out. So you may have, uh, five of your six did methods, all got to be in this area and the sixth one, oh, I've got an a or a D it was, it was bizarrely different. So that'll pop out. And it's basically a way to bucket all of the apples into an apple bucket and compare the apples and, and the different criteria. You're comparing all the oranges in the orange bucket. And you're deciding do which types of fruits do I care about apples, oranges, bananas. And as you pull that all together, you get a picture of the fruits that you care about for the methods that you've been considering.
Speaker 2 00:27:03 Okay, Joe, you and Eric have been working with the rubric for a while now, what have you found so far?
Speaker 1 00:27:12 It's been interesting. We are in the midst of doing some work with the department of Homeland security, uh, on behalf of digital bizarre, who's one of their Silicon valley innovation program, uh, awardees and DHS is, is working with the federal government to try and accelerate these technologies so that they are suitable for use by federal agencies. And so we took the work done by digital bizarre for their verus one blockchain, um, and evaluated it as a way to pitch to department of Homeland security. Hey, this is a great method. And as part of that, we are opening the conversation to other members of that SVIP cohort to get their input and to get their methods. Also well-represented with the same set of questions, the same set of possible responses. So that at the end of the day, the department of Homeland security and other federal agencies who are evaluating this technology can look at, did verus one did web, did I on these are some of the methods being used by members of the cohort and DHS and others need a way to look at these, um, coherently.
Speaker 1 00:28:25 So we've, we've applied the rubric to that. Um, it took a long time. We started by taking a digital bizarre, uh, basically a sales pitch, their rationalization argument, the rationale they called it. Um, and we identify the criteria currently in the rubric that were highlighted by that pitch deck. Um, there were about 20, maybe 25, and then we also noted a bunch of bullet points that were in those presentations that we thought could be good criteria, but weren't in the existing did method rubric that since I happened to be one of the co-editors that document, I'm intimately familiar with the willingness of the team working on that to accept and get new criteria. So we're going to propose a bunch of new criteria that we've learned in this exercise, because a lot of the things that DHS cares about wasn't on our radar when we re wrote the first, uh, did method rubric. So we're going to incorporate that. And I think that's a lesson learned for anyone working with this, is that not only are you going to pick and choose the criteria that are appropriate to use, but feel free to make up criteria that meet your use case, as long as you use the same criteria across the methods you're looking at, you can get this benefit of comparing apples to apples and oranges to oranges.
Speaker 2 00:29:44 And Eric, how did that process go for you
Speaker 1 00:29:48 Overall? I'd say it went fairly smoothly if, uh, as Joe mentioned, uh, time consuming. Um, we definitely learned that, uh, the rubric as published today is, uh, incomplete, especially when looking at, uh, use cases focused on, um, federal government or government entities, um, utilizing this technology. Uh, there was a number of gaps that we found in the existing criteria, which, um, hopefully have been, uh, plugged to a certain extent at this point. Um, but, uh, looking forward to getting on to the next evaluation and hopefully getting our new criteria bubbled back up into the rubric itself.
Speaker 2 00:30:39 Awesome. And based on all of that, what is your first advice for did method evaluators, Joe,
Speaker 1 00:30:48 Hands down, don't be afraid to rewrite the rubric for your method. Um, Ferris one in particular has a different structure for how consensus is managed. It's not a proof of work like it's done on Bitcoin. It's not proof of stake as it's done on some other chains. Um, and so we needed to adjust the criteria in order to accurately reflect what makes verus one special. And a key point of all of this work is to be able to compare different methods, including what makes them special. Like the point here isn't that we've come up with a single way that homogenous homogenizes, uh, all the methods. So you can treat them as if they were just vanilla methods. We want to have the customization and what's unique and what's different that should bubble up. It should be surfaced. So feel free to create your own criteria and feel free to, uh, propose them back to the W3C speaking as one of the co-editors we want your input PRS accepted.
Speaker 2 00:31:59 Very good. And what about you, Eric? Any advice for did method evaluators?
Speaker 1 00:32:05 Yeah, I would just echo the importance of keeping your specific use case in mind while evaluating the method. Um, really the way this has just evolved in my mind, um, and evaluation is not only an evaluation of a did method, but it did method for the use case that you were evaluating it for. Um, in that, uh, you could have a single did method that you are evaluating once for say a bespoke, um, corporate, uh, it infrastructure replacement. Um, in that case, there's likely a lot of privacy concerns that you won't be as interested in, in the, from the criteria list. But if you're using that same method to say, build a decentralized Twitter, privacy concerns become massive as you expect, hopefully the whole world to use your product. So keeping the use case in mind while going through the evaluation, I think is a key component of getting a successful, uh, evaluation that is actually useful.
Speaker 2 00:33:14 Yes. Excellent. Thank you for that. And all of that brings us to the end of today's conversation.
Speaker 1 00:33:21 We would like to thank us ourselves. So Erica, Eric, thank you for helping me get this conversation started. Um, and this brief introduction to DIDs and to the rubric, I'm hoping this continues on as an ongoing conversation before the community and the industry. So thank you both. And thank you for our audience. Thank you for holding through this entire podcast. Um, we were glad we could have you, and we look forward to continuing the conversation. Next time we will be talking with the dead working group chairs, uh, Brent Zendell and Daniel Burnett about where DIDs are in the standards process, how the did method rubric fits into that and all things about the feature of DIDs. We hope you can join us.
Speaker 2 00:34:14 Also, don't forget to subscribe or sign up for the notifications from your favorite podcasting service to be alerted when our new episodes are released.