Slack Archive

<@U013SSREL0H> has joined the channel
<@U01384U5KM4> has joined the channel
robert.mitwicki
<@U013F46SDRR> has joined the channel
juan.caballero
<@U01304JUQP9> has joined the channel
carsten.stoecker
<@U013E8JS33L> has joined the channel
carsten.stoecker
<@U013SSREL0H> Should we discuss next steps with Shivam this week?
carsten.stoecker
Shivam started working on the first JS KERI coding work: 1. As keripy is a work in progress as well, what is the best Python baseline to use as a reference? 2. Also, we would like to align on the JS crypto libraries which we plan to use for kerijs.
carsten.stoecker
Should we align later this week?
carsten.stoecker
Hi Sam ... I just talked with Shivam ... Are you available next week for a call on KERI JS?
DIF now allows individual contributors without being members. As Rouven pointed out in an earlier email, currently DIF members (Associate members) and individuals who signed the Feedback Agreement can participate in the work carried out by the WG. (Please note, the Feedback Agreement should be signed by individuals who only represent themselves; they could not and should not represent any of their employers. Preferably only freelancers.) Please send this form to the folks who already meet the individual contributor requirements and ask them to complete the DocuSign sent after submitting the form. Additionally, please connect me with the companies who are not individuals and let me share the WG charter with them and pre-explain to them the IPR release etc. related to WG contributions, so they can sign the release forms as soon as we have figured out the legal aspects, as described below. Related to your ask and the community's signals, we started to look into ways to open up certain Working Groups to allow free company contributions as well. It requires additional legal and accounting work, therefore it will take another 1-3 weeks until we can announce the changes. The current goal is to open up WGs for contributing companies for free. At least I can promise that we will make sure that even if not all WGs get opened, IDWG will have the same participation arrangement as the Secure Data Storage WG.
juan.caballero
Has a separate time been set for KERI meetings or are we just hijacking Markus' WG meetings at DIF? :smile:
<@U024KC347B4> has joined the channel
<@U024CJMG22J> has joined the channel
/github subscribe WebOfTrust/did-keri
:white_check_mark: Subscribed to WebOfTrust/did-keri. This channel will receive notifications for `issues`, `pulls`, `commits`, `releases`, `deployments`
/github subscribe WebOfTrust/keri-ietf-spec
:white_check_mark: Subscribed to WebOfTrust/keri-ietf-spec. This channel will receive notifications for `issues`, `pulls`, `commits`, `releases`, `deployments`
/github subscribe WebOfTrust/kids
:white_check_mark: Subscribed to WebOfTrust/kids. This channel will receive notifications for `issues`, `pulls`, `commits`, `releases`, `deployments`
[WebOfTrust/did-keri] Issue opened by henkvancann
[WebOfTrust/keri-ietf-spec] Pull request opened by SmithSamuelM
[WebOfTrust/keri-ietf-spec] Pull request merged by SmithSamuelM
michal.pietrus
<@U02N7K951DW> has joined the channel
<@U02MLCPJ85A> has joined the channel
<@U02MD0HA7EJ> has joined the channel
<@U02N0K6LL93> has joined the channel
michal.pietrus
Hi guys, the `qry` msg defined in has a `q` attribute which is the actual query payload. Is there a schema anywhere for this `q` attribute, i.e. what shape may it have?
<@U02PA6UQ6BV> has joined the channel
michal.pietrus
welcome back <@U02PA6UQ6BV>
Hi <@U02PA6UQ6BV>!
Thx, Michal and Phil, I wasn't aware of this slack, sorry for my late arrival
robert.mitwicki
It is never too late :wink: Having all of you here, I have an interesting topic to address and am trying to collect some materials: the security aspects of a KERI-based identity system. That is in the context of , which showed some problems of Lissi, an Indy-based solution. Long story short, they took it down as soon as they got a "review" from the CCC showing that it was actually quite easy to break the system (hijack a subdomain via DNS and do a MITM attack on DIDComm).
robert.mitwicki
The idea is to create a matrix showing which types of attacks a KERI-based system is immune to and how it compares to others
robert.mitwicki
a similar matrix to the one which was started some time ago on ACDC
robert.mitwicki
Any chance there is an English version of that article somewhere?
Let me translate it, a good way to warm up
robert.mitwicki
Since you don't have a TDA yet, my TDA is not able to provide you direct access to authentic data, but here you go the old-fashioned way :stuck_out_tongue_winking_eye:
Looks good enough. The title is "Digital driving licence: a disgrace"
robert.mitwicki
If anyone would like to help create such a matrix, feel free to throw in ideas and content:
Yes, I am interested, as I could also involve the work I am supporting at BCC Torgap / DID:onion in the comparison
robert.mitwicki
When you add a new "Type of the system", add it only in the first spreadsheet; the rest will be filled automatically
OK
thomasclinganjones
<@U02Q3A81HA5> has joined the channel
This is a first draft of the KERI MOOC . Feel free to comment.
michal.pietrus
FYI Within DIF they continue to develop `did:keri` .
Q: Implementation maturity?
Implementation by Jolocom (in Rust) - implements the core
Other implementations - GO, Javascript
I'm stunned by the arrogance of not mentioning HCF or `keripy`
michal.pietrus
I am actually thinking of stopping using DIDs altogether. The last Mozilla response to DIDs was not enthusiastic, especially the last paragraph about blockchains, which in the KERI case is not applicable at all. Since `did:keri` will be made in DIF, it's difficult to say anything about compatibility or interop. Overall it should work, but using `did:keri` in the spec (I've seen some in `ietf-cesr-proof`) may be confusing and lead to mistakes.
I just removed all references to `did:keri`from the CESR Proof spec
robert.mitwicki
robert.mitwicki
Maybe it's worth discussing on the next KERI call what the implications are
robert.mitwicki
especially this part:
<@U035D255M0R> has joined the channel
:wave:
Good Morning Folks- As we move to harden KERIpy for a production implementation of the vLEI ecosystem, we (mostly Sam) analyzed our approach to rotation events for distributed multisig identifiers. In particular, our support for partial rotations (rotating with only a subset of all signers that still satisfies the signature threshold) came under scrutiny. The result was the following issue and subsequent PR addressing the issue. In short, we changed the format for the `n` field and added an `nt` field for all establishment events to support partial rotations. The HackMD referenced in this issue has details of the new events and the issue itself details the logic needed to ensure a secure partial rotation.
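For anyone who hasn't read the HackMD yet, here is a rough, illustrative sketch of the new establishment-event shape under that change. All field values below are placeholders I made up for illustration, not the real SAIDs, digests, or test vectors from the PR:
```
# Illustrative only: rough shape of a rotation event after the n/nt change
# described above. Values are placeholders, not produced by keripy.
rot_event = {
    "v": "KERI10JSON0001b7_",            # version string (size is a placeholder)
    "t": "rot",
    "d": "E_said_of_this_event____",     # SAID of this event
    "i": "E_prefix_of_identifier__",
    "s": "1",                            # sequence number
    "p": "E_said_of_prior_event___",
    "kt": "2",                           # current signing threshold
    "k": ["D_key1", "D_key2", "D_key3"],
    "nt": "2",                           # NEW: threshold for the next (pre-rotated) key set
    "n": ["E_digest1", "E_digest2", "E_digest3"],  # CHANGED: now a list of next-key digests
    "bt": "3",
    "br": [],
    "ba": [],
    "a": [],
}
```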
The PR is also referenced in the issue if you want to see the implementation in KERIpy along with all the test vectors that broke as a result. :smirk:
<@U035JNUPF1V> has joined the channel
:wave:
<@U035WESCM0V> has joined the channel
<@U035ZHBF21H> has joined the channel
:wave: :wave:
:wave:
<@U035R1TFEET> has joined the channel
:wave:
:wave:
:wave:
<@U024CJMG22J>, here are my next set of questions, prefaced with assumptions:
*Assumptions*:
1) The Provenant platform will create hosted cloud wallets in AWS for entities
2) The Provenant platform will use delegated keys to act on behalf of the holders of the wallets
*Questions*:
1) Since private keys need to be stored in an entity's (hosted) wallet, should the entity's inception event happen after the wallet is created, and run on the server where the wallet is stored?
2) Does the Provenant platform, on behalf of an entity, need to supply the AID to the QVI for it to issue a vLEI to the entity?
3) Who makes the request for a vLEI?
4) When a QVI issues a vLEI to an entity, how does the entity receive the credential offer?
5) GLEIF uses a GLEIF delegated key to issue vLEIs to QVIs; does it also give QVIs a GLEIF delegated key, or do QVIs use their own private key to sign the credentials they issue?
6) Can a delegated key be created from a delegated key? Who approves the delegated key of a delegated key?
7) Does a request for a delegated key result in a new AID and pair of keys for the delegatee? If yes, does that mean that the delegatee needs a wallet?
8) Please confirm that pre-rotated keys and delegated keys are stored in the holder's wallet in the secure key storage along with the private key
9) Does an entity with a Validating Agent role need a vLEI? Or does the Validating Agent only vet entities and work with LOUs to issue LEIs?
10) If a wallet is created before an inception event, how is the wallet associated with an entity? Is it by the friendly name of the wallet?
1. Yes, creation of an identifier occurs after the database and keystore are created. From the command line tool, you run `kli init` first and `kli incept` afterwards.
2. There are detailed flows for identity assurance and identifier exchange as part of the vLEI ecosystem. Those need to be followed to ensure the secure communication on which the credential issuance is based.
3. There are designated representatives for a Legal Entity who are authorized to request any of the credentials from a QVI.
4. There is a credential issuance peer-to-peer protocol defined as a series of `exn` messages. The holder will receive an `exn` message with a `/credential/issue` route.
5. Private keys are never shared with KERI. Each QVI will create an identifier that is delegated from the GLEIF External AID. So the QVI has its own keys for its identifier, which is delegated from GLEIF.
6. Yes, a new identifier can specify an identifier that is itself a delegate as its delegator. So it would look like A -> B -> C where A is B's delegator and B is C's delegator. To authorize an establishment event of a delegate, a delegator needs to anchor the establishment event in either an interaction (`ixn`) or a rotation (`rot`) event. So in B's case, she could authorize C's events with an `ixn` event and not require any further action from A. But if B wants to use `rot` events to anchor C's events, those events would have to be verified by A with a corresponding anchor.
7. Yes, a delegate needs its own wallet and holds its own keys. The delegate initiates its own AID, specifying the AID to use as its delegator.
8. All private keys (current and next) are stored in the secure key storage, yes.
9. Validating Agents are part of LEIs and not vLEIs, so they won't need vLEI credentials.
10. A wallet is merely the database and keystore. An identifier created for that wallet has all its key material stored in the keystore and yes, it is given a user-friendly name so humans can remember what it is later.
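For illustration of point 6, the delegator's approval is just a seal anchored in the `a` field of one of the delegator's events. Very roughly, and with placeholder values (the exact seal layout is in the KERI spec, not invented here beyond the placeholders):
```
# Rough sketch of the delegation approval described in point 6 above.
# Values are placeholders; not output from keripy.
delegate_event_seal = {
    "i": "E_delegate_prefix_______",    # the delegate's AID (C in the example)
    "s": "0",                           # sequence number of the delegate's event
    "d": "E_said_of_delegate_event",    # SAID of the delegate's establishment event
}

delegator_ixn = {
    "v": "KERI10JSON0000cb_",
    "t": "ixn",
    "d": "E_said_of_this_ixn______",
    "i": "E_delegator_prefix______",    # B in the A -> B -> C example
    "s": "4",
    "p": "E_said_of_prior_event___",
    "a": [delegate_event_seal],         # anchoring this seal approves C's event
}
```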
<@U024CJMG22J> thank you, can we meet briefly, when you have time, to go over your responses?
<@U036FDVV3GV> has joined the channel
:wave: Hi everyone!
:wave:
:wave:
:wave:
robert.mitwicki
FYI:
CEN/CLC/JTC19 has approved a work item aiming to produce a Technical Specification with the following title: "Decentralised Identity Management Model based on Blockchain and other Distributed Ledgers Technologies - Part 1: Generic Reference Framework". As per document N 103, the scope of the work item is the following: "The proposed standard will specify a reference architecture for decentralised identity management, optionally enabled by distributed ledger technology (DLT) and blockchain systems. The reference architecture is aimed at natural persons and legal entities and addresses concepts, cross-cutting aspects, architectural considerations, and architecture views, including functional components, roles, activities, and their relationships with blockchain and DLT."
Will they accept solutions that don’t need blockchain as a primary root of trust? :grinning:
I just pushed to the repo with all artifacts needed to verify the contained credential. All files are in a new `interop` directory. They all have the extension `.json` but most of them contain additional cryptographic material in the form of attachments so they won't validate as JSON files, but will parse in a proper KERI parser.
michal.pietrus
this is what we've done for Node.js via NAPI bindings. The web would require a different approach to make it work
Hi Janet, my name is Ruth Choueka, I work with Randy Warshaw. I was wondering if I could possibly ask you questions regarding the wireframes you shared with him? For instance, in the Profile/Contacts/Settings wireframes there are several wireframes for managing aliases. My understanding, from looking at the wireframes, is that an alias can be associated with a default credential, and a profile can have a default alias. Is that right? What is the purpose of different aliases, does each alias represent a different AID (inception event)? In the wireframes shown, the *default alias* value is the same as the *name* value, is that always the case? Regarding the witness URLs, the user has to enter them, he does not have the option of choosing them from a drop down list of possible witnesses?
Hi Ruth, We're actually working on revising those, so these are great questions! :relaxed:. We're likely going to go with last used over setting a default AID. Each one does represent a different AID. You can name it whatever you want, the name just seems like the more likely option.
We don't have a witness list yet, but a drop down is a possibility. I'll bring it up with the team. Thank you for your review and feedback :blush:
Thanks for your quick reply.
Glad to answer questions anytime!
What is a good first issue for me to start contributing?
I’m on page 57 of the 131 non-reference pages in the KERI whitepaper and feel ready to contribute. I will have the KERI whitepaper finished by the end of the week assuming all goes to plan. I want to apply my learning so I can anchor what I’ve read.
This is the easiest one I found: . However, it isn’t anything with KERI internals.
michal.pietrus
I believe the reference impl, i.e. keripy, is a bit ahead of the whitepaper, especially regarding discovery and the various types of additional messages that are not described in the WP. As you're done with the WP, you can take a look at the OOBI spec Sam showed at the end of the meeting
This one looks like a good start: . I'm not quite there theory-wise, though I'd like to start coding. Most of the other issues look like significant changes to my untrained eyes.
Great, thanks, that’s useful to know. I am reading all the specs so I can get up to date as well as looking for issues I can contribute to so I understand clearly the current state of the code.
Would you happen to have the link for the OOBI spec? <@U02N7K951DW>
michal.pietrus
<@U03BR5RNQTT> take a look . All the specs are listed there
michal.pietrus
<@U03EUG009MY> there's also the CESR spec, worth reading as it is the underlying protocol for streaming data across network participants. Compared to the OOBI spec, this is a bit lower level, but it might be worth taking a look at how it works.
michal.pietrus
The KERI WP + OOBI + CESR specs are currently the best starting point to grasp all the concepts in KERI. Then ACDC comes if you're also interested
Yes, I’m very interested in ACDCs and scheduled my reading of that directly after the KERI WP. Do you recommend OOBI and CESR first before ACDC?
I got three articles. One in draft about OOBI, two final: about autonomic identifiers more in general (and the why of it all) and about CESR. Comments welcome. Keri.one/resources is a great starting point too.
another one about CESR
Autonomic identifiers:
As I said, comments more than welcome
Thank you!
That issue is likely a bit advanced for a first time contributor.
If I get a chance today, I'll try to create a few issues for enhancing our test coverage around the command line interface and the REST API. We have a pretty decent approach to creating those tests so it should be a good place for someone to start. It is also a glaring need we have.
Wonderful, yes, point me. I am glad to contribute.
Just finished the KERI whitepaper. <@U02N7K951DW> so OOBI, CESR, and ACDC next, in that order?
Here’s an interesting graph of the length of each of the specs indicated as dependencies in the ACDC spec, also including the ACDC spec itself. The page lengths include the tables of contents and the references section at the end of the documents so actual reading text page count will be less:
image3.png
Any issues you’d like me to start on?
Sorry <@U03EUG009MY> haven't had a chance yet. I'll try to create a few issues this evening.
I created issue
Please contact me if you need more information
Thank you, I’ll look it over now.
great, looks doable. I’ll ask questions as they come up.
Thanks <@U024CJMG22J>
michal.pietrus
makes sense <@U03EUG009MY> :thumbsup:
thanks
Now that the `keripy` development team has finished the big push to complete the vLEI XBRL Annual Report Signing Pilot (see press release here: ) we are updating the branches and development process to live up to our commitment to have a more stable `main` branch in our repository. We just renamed our primary branch from `master` to `main` and then created the `development` branch which will be used for all active development moving forward. Any PRs should be submitted against `development` and we, as a community will agree when it is time to cut a next release from `development` and merge all outstanding changes into `main`. We will commit to ensuring that all unit tests, demo scripts, command line commands and REST APIs will work as advertised at all times on the `main` branch moving forward. (There will be a series of PRs over the next week or so to get us there). If you have already cloned or forked the `keripy` repo, you can run the following commands to update your local copy of the primary branch from `master` to `main`.
git branch -m master main
git fetch origin
git branch -u origin/main main
git remote set-head origin -a
Of course changing `origin` to whatever your remote is named for the `WebOfTrust/keripy` remote repository.
I already had cloned the keripy repo. So I used GitHub today to sync WebOfTrust/keripy to my remote repo. The first two commands ran successfully to update my local repo.
git branch -m master main
git fetch origin
But the third:
hvc@Henks-Mbp20 keripy-1 % git branch -u origin/main main
error: the requested upstream branch ‘origin/main’ does not exist

Any idea what I do wrong?
michal.pietrus
<@U02PA6UQ6BV>, I cloned your repo and all is fine from my side
michal.pietrus
these commits:
424234c (HEAD -> main, origin/main, origin/HEAD) img alignment changed - 2
82aded6 img alignment changed
384ea51 typos corrected
e920883 all logo references added
ce720d3 md file typo corrected
0fffd6d logo files added and first ref to them
7c568b4 WebAuth and IPEX added
f6536c9 DIDComm added
741e5cf Git Flow, Github Actions added
072ec0a Bazel added
61ac372 NPM added
56a93f8 CAXE and Cesr4j added
32dc5f5 NTRU considerations added
bfe5fb6 Docker typos corrected
8f1cf38 Docker and NPM added, general reasons added.
d479c66 a few typos corrected
cfadc54 Initial fill of the file keri-dev-env.md
are yours and are on top of `keri/main`
michal.pietrus
you can start a PR in weboftrust/keri
Ouch, sorry, I made a mistake: it's not keri-1 but keripy-1.git that is relevant
michal.pietrus
you have an old fork; you need to align it first if you'd like to contribute to the weboftrust one
I saw it from the fact that my other documentation work popped up here in your report.
michal.pietrus
if you want to align with keripy from weboftrust:
cd keripy/
$ git remote add wot git@github.com:WebOfTrust/keripy
$ git fetch wot 
$ git checkout -b main
$ git reset --hard wot/main
michal.pietrus
if you have some local changes, make sure to commit them first (can be on master)
No prob, no local changes on this one :slightly_smiling_face:
Still puzzled. Won’t disturb you too long nor flood the main channel here. I’ll PM you
I'm trying to mentally solidify how the backer connection/configuration information, along with the backer id list, is updated and evolves in KERI. I read through the white paper regarding inception/rotation events and the configuration of the witness list/threshold, but i'm still not convinced I can describe a controller porting their ledger-backed KERL from one ledger to another (similar to pruning and adding new witnesses… but i think there are some nuanced differences). I am talking to <@U024CJMG22J> on Thursday, so if that is a better venue to discuss, i can be patient :)
Hi Lance, from what I've understood, ledger-registered backers are a secondary root-of-trust at best. It's an extra guarantee. The primary root of trust is the KEL, which is cryptographically verifiable to this primary root-of-trust (PKI). Afaik KERI doesn't organize portability between two ledger-registered backers (from one ledger to another). However, KERI is a portable autonomic identifier system in itself by design, and you're free to use as many blockchains as you like to create secondary roots of trust. But how efficient or useful that'll be…. Phil definitely has a better, more thorough understanding of KERI and all of its technicalities. But this is my take on it, feel free to correct me if I am wrong (because I might well be :slightly_smiling_face: )
thank you <@U02PA6UQ6BV>. the glossary is really helpful for reconnecting to these terms, as I get to know them. we love the root-of-trust model of KERI to enable portable AIDs, for sure! I’m currently thinking through the ledger as a replacement for the witness network. if we do that (not just discovery nor hybrid model, like we discussed yesterday), then there will be configuration in the Inception Event of the AID that includes configuration for the ledger backer oracle connection information? If so, then on rotation i should be able to update my oracle list (similar to updating witnesses, and witness thresholds,etc). but i’m not certain if the backer configuration is separate from the backer ids and then how the mechanics of changing that during rotation might work. Hopefully I can address some of my misunderstandings when I talk to Phil about the GLEIF witness network. looking forward to it!
Interesting, Lance, I'll follow your efforts. Here again my two cents: the problem with a ledger being the single root-of-trust for a witness (if I understand your idea correctly) is that its keys are not rotatable. You'd need a transaction on the ledger to rotate the controlling key of the witness, which leaves you locked in: not portable anymore (locked to the silo / blockchain of this specific ledger); it becomes an algorithmic identifier, not an autonomic one. I believe KERI needs AID-based witnesses to be able to offer KERI's
• security model: duplicity detection and reconciliation
• features: portable identifiers, secure attribution over the web
Good points! I hadn’t spent much time unwinding the witness/backer AID and how that interplays with the user KEL Inception/Rotation Event configuration. Great concepts to consider and unwind. Looking forward to discussing more as I learn
> The problem with a ledger being the single root-of-trust for a witness (if I understand your idea correctly) is that its key are not rotatable. Thanks for drawing attention to this Henk. That makes sense.
Henk's proposed Glossary meeting: I'd like to attend and am avail Thurs/Fri at 10am ET
Same here
Me too
Great to hear. Could we settle on the same time as the KERI/ACDC meeting but then on Thursdays, starting from next Thursday July 28? In Europe: UK/WET 3PM and CET 4PM.
Bi-weekly will do.
I prefer Thursday morning, but can do either
Phil is not available at this time in his timezone, I am afraid
Thursday morning is therefore problematic
Thu 10 am ET, same time as these calls, works for me
I'm usually in the Hyperledger Aries Javascript meeting on Thursdays @10am ET
Is that every week or alternating?
It’ll be alternating to start with
Thursday morning same time as KERI call works for me. So 7am PT.
Biweekly
:+1:
I'll create an info page in the main Terms & Edu repo: with the usual links to Zoom, agenda, etc. I suggest we start NEXT Thursday the 28th at 7am PT (4PM CEST) with this agenda:
1. Why are we doing the Terms & Edu call?
2. What are the main sources of info / concepts / terms currently for KERI/ACDC?
3. How to sync glossaries with the outside world, a proposal
4. How to set up KERI/ACDC education at various levels of understanding
5. <any other pressing questions / issues / suggestions?>
I’m watching Stephan Wolf Webinar on GLEIF. ‘Accelerate trust on a global scale: Who is who’ in digital trade. Fantastic use cases. How did GLEIF and KERI find each other?
If there are any recordings/videos about the Human Colossus use case, i’d love to listen/watch that as well
XBRL demonstration with signer in different parts of the report is also great :)
GLEIF originally did a study many years ago about blockchain, with a view to making a recommendation to the RoC. They met Timothy Ruff at the Hyperledger Global Forum, decided to look into KERI, got in touch with Sam, and the rest, as they say, is history
(A very abridged version)
great stuff, thank you
michal.pietrus
<@U035R1TFEET>, i am not sure there are any recordings, but we currently target:
• society (i.e. birth attestation)
• pharma (clinical trials)
• a use case for supply chain
KERI is universal. Any kind of use case with multiple governed parties involved and where authentic data comes into play is applicable.
Very good. Recordings are nice because i can listen to them while coding :slightly_smiling_face: Really great to see these concrete use cases shaping the KERI ecosystem
michal.pietrus
multitasking :muscle:
Could you provide a link to Stephan Wolf Webinar on GLEIF?
Update on tomorrow's Terminology and Education Zoom meeting: the draft repo is here. As soon as I get the Zoom link, I'll take the repo into production under WebOfTrust on GitHub.
I have a collision which I can't avoid this week. Will there be an agenda/minutes doc (?)? Recording?
I was watching live. If I see something on Twitter or LinkedIn then I’ll post here.
We're all set for tomorrow: hope to see you at the, soon-to-become-legendary :wink:, first Concepts, Terms and Education Zoom meeting.
Is there a meeting or zoom link for Thursday's call?
Yes, it’s all on the agenda page, like the other WebofTrust Zoom meetings:
Thank you, this is awesome, I'll see you there!
Join Zoom Meeting KERI Concepts and Terms Meeting ID: 898 9352 7631 Passcode: 197037
Great <@U03EUG009MY>
From an email this morning: As announced during the session, we will send you the webinar recording *in two weeks’ time*.
<@U02PA6UQ6BV> <@U024CJMG22J> Good job on the edu kick-off meeting! Thanks
Thx, I will come up with the meeting minutes within 24 hrs.
joseph.l.hunsaker
Unstoppable Domains. I wish I had enough to get there for KERI
Image from Android
Haha. Those are expensive!
joseph.l.hunsaker
I think the keri.dao is the most appropriate.
joseph.l.hunsaker
I hear a lot about ACDC.. as another group? can someone point me to that?
<@U03QRSUA87Q> Take a look at
joseph.l.hunsaker
And there is an ACDC meeting weekly as well?
yes
See the Zoom link in the Meeting Schedule section of the homepage.
Welcome <@U03RLLP2CR5>! I saw that you joined today, and I was just saying to <@U03P53FCYB1> today after the ACDC meeting, how great it would be to have you in the awesome KERI community :)
daniel.hardman
:wave: Hi everyone!
Welcome <@U03RLLP2CR5> ! Great to have you here
Hi Daniel, I have been quietly filling up “your” terms & glossary machine here -> . Remarks welcome.
daniel.hardman
Cool! I love that. I will go and review and contribute.
daniel.hardman
<@U02PA6UQ6BV>: I had a look at your terms wiki. Very impressive work, and I think this will be immensely valuable to the entire community. I turned on automatic glossary generation for the wiki, and configured it slightly. Here is the URL for the glossary that goes with your terms wiki: Notice that when you hover over a term, there's an icon at the left that allows you to link to a specific term. When you click the icon, it copies onto your clipboard a link to that term. For example, here's the term "digital signature": I noticed a few things about the data on the wiki pages that were making the glossary sub-optimal, so I fixed things here and there. Some things you might check if something doesn't render the way you want: 1. Images as displayed in raw wiki markdown have a different canvas size than images in the glossary. For this reason, it is often best to force images to be a specific size that will be good for the glossary. You can see how I've done this in the wiki page for authentic chained data container, for example:
daniel.hardman
2. The headers for additional sections of your wiki page should not be more indented than the header for Definition. If Definition begins with two hashes: `## Definition`, then subsequent sections should also begin with 2. If they don't, then the glossary generation may only display the more indented content.
Thx a lot, Daniel, I am still not done transferring the old glossary to the new one at TrustoverIP, but I will take account of your tips while doing it.
After completing this task, I will harvest the wiki data to create our own Jekyll version at WebofTrust.
Have a look at the kli interview and how it hopefully has become an integrated education resource :
<@U02PA6UQ6BV> Very nice work!
<@U02PA6UQ6BV> We should start using <#C03RB6ASVUM|edu> for this stuff
right
joseph.l.hunsaker
not sure if this can be part of the meeting tomorrow, but I would love to hear about the intersection of keri and soul bound tokens.
For one thing, KERI is a much less hyperbolic name than Soulbound Token. :smirk:
SBTs == DIDs, DeSoc == SSI. This is literally just a rebranding article as far as I can tell. And based on who "created" the idea I'm going to guess they will be bound to a blockchain...
+1, To me #SBTs are marketing bullshit of people that are working towards anchoring of identifiers on ledgers. They don't acknowledge more fundamental work already done by others and reinvent the wheel. Steps back, not forward.
joseph.l.hunsaker
I understand the antipathy. How is Keri being marketed? Marketing is important. I have been a part of some pathetic startups where marketing was non-existent. All of them failed as a result. There is no getting around the “popularity” of certain concepts whether we like them or not.
joseph.l.hunsaker
I am still a noob when it comes to all of this… I did not know that SSI was aimed at creating Decentralized Society. I thought it was a small part, namely the Identity structure. Which will help enable a Decentralized Society
Is anyone working on an open-source ACDC implementation?
ACDC is under the Apache2 license, as is GLEIF's vLEI implementation.
“ACDC is under Apache2 license” - you mean the spec?
“as well as the implementation of vLEI of GLEIF” - you mean vLEI is an impl of ACDC?
<@U03RTP08W3B> ACDC has been implemented inside of `keripy`. We have full credential issuance, revocation and streaming support in both the REST API as well as the command line.
If you look at the script `scripts/demo/vLEI/issue-xbrl-attestation.sh`, it showcases creating a chain of ACDC credentials culminating in a data attestation (non-issuee credential) using the command line. It is a great place to start understanding the ACDC implementation inside `keripy`
Article in famous technical publication in the Netherlands (behind a login wall and in Dutch), by Krijn Soeteman who interviewed Sam. The English version will be available next week, without hurdles.
joseph.l.hunsaker
So, did I understand that we can be added to the W3C via a KERI organization?
No, Sam was mentioning GLEIF; the organization that <@U024KC347B4> and I work for is a non-profit, and W3C accepts non-profits without a fee
joseph.l.hunsaker
Ok, so GLEIF members can get W3C membership. Right?
GLEIF employees. We don't have “members” per se.
joseph.l.hunsaker
TIL: The W3C has some hefty membership fees
joseph.l.hunsaker
Makes sense
joseph.l.hunsaker
I am requesting that my company become a member
Looking at that table it appears that GLEIF would not be able to join for free.
I could buy a membership via my not-for-profit vehicle . And then associate more people with Bird. There is a maximum of 10 employees, and currently I have none (and would like to keep it this way: no one on the payroll please :wink: ). What is our interest? To have a voice within W3C quickly? Or can we take it slowly and steadily? I could opt for a membership from Oct 1. It's 1950 euros for a year.
I think it is in GLEIF's best interest to have a voice in the W3C to ensure that ACDCs become a conforming implementation of the W3C VC Data Model in v2.0. I'm going to push internally to ensure we join.
Agreed, but does this mean that non-GLEIF-ers can’t join W3C and the discussions there?
Oh no, not at all. I was simply trying to convey that I was opting to join with GLEIF and not take you up on your offer. Sorry if that was unclear.
Haha, language thing: my negation "sounded" critical. Let me ask it differently, as it was meant to be: can people like me still effectively participate in W3C discussions without a membership? And if not, could it still be a smart thing to do: gather people under a not-for-profit and use one small membership to get heard.
petteri.stenius
Hi all, thanks for having me here :wave:
petteri.stenius
I'm trying to navigate the KERI space. My interests are mainly keripy, Keep and vLEI. I've been playing around with these for some time now. I'm thinking of some slightly more generic vLEI-related use cases, like signing as an OOR or ECR and then verifying as a generic verifier. As this topic is not always self-evident, what would be the best channels to ask questions and share experiences?
<#C03RB6ASVUM|edu> is possible
Welcome <@U03U37DM125>
I do not believe you can participate without a membership. So yes, gathering a small group of fewer than 10 people under a non-profit is a good idea.
Are we in a hurry to get attention, or is per Oct 1 good enough? I think we could use our time until Oct 1 to organise behind the scenes.
Agreed. Oct 1 is fine.
Here's the simple plan: I'll add experts' GitHub identifiers to BIRD's GitHub page. I think we need motivated, but also capable, experts to present the KERI suite ( :slightly_smiling_face: ) in W3C ranks.
That sounds like a good idea
kalin.nicolov
really excited to read this one when it is available!
Fantastic 2 part podcast on did:peer, and the upcoming KERI work is mentioned: And
Thank you for sharing. This is a valuable resource.
Do we have the recording from this morning’s meeting yet?
I want to go back and listen to the first 17 minutes I missed.
“Sam’s the DJ”, he records, takes notes and adds the recording to the meeting minutes in the agenda.
thanks
I listened to these a second time on my trip home. They are so good. <@U03RLLP2CR5> when do you think we'll start discussing the KERI impl of did:peer in our meetings? I can't wait to consider the KERI architecture in this context. Also maybe we'll have a dedicated channel here on slack to discuss the impl? ... not to mention DIDComm implications, etc.
Agreed. I’m excited to contribute to KERI + did:peer
rodolfo.miranda
I couldn't find references to a KERI bridge in the whitepaper. Can someone point me to any documentation or reference if available? I think that it was mentioned that there are groups working on bridges to indy/sovrin and ethereum. Anything that helps me understand more is appreciated. Thanks.
daniel.hardman
<@U024CJMG22J>, do you know where AIDs are formally specified? I see , but no content there. I don't see a repo that contains AID specs in IETF RFC format... I know that the term "AID" is formally defined in other specs, but I'm talking about guidance on what the structure/format and properties of an AID string must be.
AID has been used as an acronym for Autonomic IDentifier in general, at least so far. Which has led to the glossary item
SAID, as a part of the generalized AID family, is defined in
Yep, the glossary item of SAID here:
<@U03P53FCYB1> I'm not sure there is anything more concrete than
There was a good session from the last IIW but I do not have a copy of the recording.
daniel.hardman
Yes, I am aware of the SAID spec. It was just the AID that felt under-specified to me. Perhaps it doesn't really merit a spec. I see that Henk's definition references some verbiage about it being a CESR-encoded "qualified cryptographic primitive". Given that "CESR-encoded" conveys a wealth of requirements about structure, only the "qualified cryptographic primitive" phrase seems ambiguous.
What do we mean by an 'Attributable Identifier'? I think it's a good idea to at least not create a collision on one abbreviation ('AID').
daniel.hardman
I suspect that the repo I found is stale and should be removed. If so, then we can ignore the collision as being just a historical curiosity that's been fixed.
Autonomic Identifiers have been pretty well described I think in this piece: Architectural types of identity systems; Image adapted, original by
I enhanced the glossary item AID
in the context of KERI and ACDC the term qualified is specific and I've added the link to the matching glossary term.
<@U03RLLP2CR5> I do believe that AID repo was a placeholder that we probably don't need any longer. And yes, the structure of an AID is documented in CESR, but I think someone would be hard pressed to understand that on the surface. Since CESR is an encoding format that is not specific to KERI, or identity for that matter, a real definition probably doesn't belong there. Perhaps as <@U024KC347B4> suggested we should add something to the SAID spec that specifically defines an AID.
Thoughts?
daniel.hardman
yes, I think that makes sense
Whatever definition we choose to prevail, we might need to diversify our glossary terminology by level. Level*: autonomic identifier (AID) -> Phil Windley's def. Level***: our internal exact definition that even the average SSI expert might have a hard time getting his/her head around. An autonomic identifier*** is an elaboration of autonomic identifier*? Put differently: a KERI AID is in the Phil Windley set?
I think so, but just to sync I ask the question.
After finishing both of these podcasts I again confirm they are both worth a listen or two
Has anyone sent out an invite for the Thursday CESROX meeting? If not then I will if I am able. Is this something that needs to be done by a WOT official account?
We also need to come to consensus on a time. My suggestion is 8:00 AM MDT since that is the same time slot for the KERI and ACDC meetings.
That is the same time and day as the EDU meetings (though you wouldn't know that I knew that, as I've missed the last two :pensive: ), so scheduling CESROX on alternating Thursdays with the EDU meeting would be a good idea as far as I'm concerned.
I'm not aware of an announcement.. composable event streaming?
Yes, take a gander here:
and here: draft
sounds good, I’ll send the invite out tonight.
The meeting link for Thursday’s CESROX kickoff. This link also works as the repeating meeting link. Send me your email if you want to be on the calendar invite.
Here’s the meeting agenda:
The agenda is not publicly available yet?
Thanks, I thought it was public by default.
joseph.l.hunsaker
Sorry I missed the meeting this morning. We deployed our system to production last night, and we had a critical system "accidentally" get deleted by one of our mid-level devs.. :face_with_rolling_eyes: Still dealing with the aftermath.
I hate when that happens :smiley: Good luck
You mean one of your _former_ mid-level-devs?
<@U024CJMG22J> <@U024KC347B4> do we need to do GPG commit signing for each commit for any WebOfTrust repo?
We don't require GPG signing keys, no.
Where is the corresponding directory for WebOfTrust?
The KIDS quickly fell out of date and we abandoned the concept in favor of doing the IETF drafts.
The first thing you should read is ietf-cesr:
Yes, I read it a few months back, though it’s time I read it again now that I’m in the code for keriox.
Thanks for the update on KIDS.
daniel.hardman
For general interest, I found some good written material about how AIDs are built (the question I asked a few days back in this channel). For someone who already understands CESR deeply, the short description (a CESR-encoded public key) may be adequate. But for me, the extra detail was helpful:
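To make the "CESR-encoded public key" phrase a bit more concrete, here is a heavily simplified sketch of how 32 bytes of key material become a qualified prefix. This is not the real implementation (the full scheme, with all code sizes and lead bytes, is in the CESR spec and in keripy's Matter class); it only shows the basic pad-encode-substitute idea:
```
import base64

def qualify(code: str, raw: bytes) -> str:
    """Simplified sketch of CESR text-domain qualification for a one-character
    derivation code over 32 bytes of raw key material (e.g. an Ed25519 key).
    Illustrative only; see the CESR spec for the real rules."""
    assert len(code) == 1 and len(raw) == 32
    padded = b"\x00" + raw                            # one pad byte so total length % 3 == 0
    b64 = base64.urlsafe_b64encode(padded).decode()   # 44 characters, first one is 'A'
    return code + b64[1:]                             # swap the pad character for the code

# e.g. qualify("D", ed25519_public_key) -> a 44-character qualified string
```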
Issuance and Presentation Exchange (IPEX) concept & terms in the glossary for review.
Screenshot 2022-09-06 at 11.32.11.png
how many different repo wikis do we have terms in?
Terms relevant in the (roll-out of the) KERI watcher network (from the Zoom meeting today):
There is only one, called toip acdc/wiki, for historic reasons
Here are some of the DIDComm protocols
thanks
The function of the One is now to return to the source allowing a temporary dissemination of the code you carry, reinserting the prime programming...
In other words, the Neo branch has been merged into `development` in KERIpy
And the Keep `development` branch has been updated to reference the new well-known witness AIDs (which have now changed due to the Neo update to CESR).
You will need to pull both Keep and KERIpy development branches together.
<@U02PA6UQ6BV> it's a quote from the Matrix
petteri.stenius
<@U024CJMG22J> did the AIDs for schema types change too?
<@U03U37DM125> The AIDs for the schemas have changed, but they will change again as we're updating the schema to support privacy preserving ACDCs - I'm hoping to have that work PRd this weekend
*SAIDs
My bad
Hahaha! I love it
The KERI One
Thanks for the update.
Can't attend today's zoom meeting. See you next week.
I missed as well. I’ll catch up with the recording.
I’m sure that I am just missing something obvious on Confluence. Where can I find recordings for past KERI & ACDC meetings?
<@U02N0K6LL93> the meeting page has previous recordings.
Although now I see that not every entry has one.
We/Sam can reach out to the ToIP folks to see if we can get them and backfill the meeting page.
joseph.l.hunsaker
Where are the recordings posted?
Thanks, Kevin. Yeah I’m looking to get some of the more recent recordings :slightly_smiling_face:
I will miss tomorrow’s meeting since I’ll be at a conference. I look forward to catching the recording.
joseph.l.hunsaker
This seems related to Keri’s objectives
Hahaha! Fact check with DIDs and VCs. Precisely what we knew would happen. It’s time we also kept a record of who claims what is misinformation so we can interrogate the claimants as well as the claims. Reputation systems to the rescue.
I think it’s KERI & ACDC related. KERI for secure attribution over the web and ACDC for veracity (based on VCs). It’d be interesting to see if specific ACDC stuff like correlation prevention and step by step disclosure could be a killer feature for the CAI members. But indeed Kent brings up another interesting aspect: what is misinformation? Everybody needs to commit themselves to signing off content and schemas, also the CAI members themselves. We’ll see who’s the most obvious spreader of fake news, maybe the ones that shout the loudest for CAI :joy:
joseph.l.hunsaker
So, hypothetically, if something turns out NOT to be misinformation, it will “ding” the person that called out the misinformation? and all other decisions based on that NOT being misinformation?
Now wouldn’t that be fancy?
And, ideally, there would be a simple interface to know who does the dinging so you can take in all information and calculate your own trust score based on your personal criteria and assigned weight to any given event in a graph of trust events.
Personalize veracity assessment based on credentials
and being more sure than ever that what you got over the web is from the one in control over the identifier at the time of signing.
Precisely
Is KA2CE used in KERIpy? I didn’t see the string `ka2ce` or `kace` anywhere in the source. I might not be using the right search terms.
No, it has only been tested so far in KERIdemlia
KAACE and KA2CE are synonyms
Small update to the ietf-keri repo: - typo and heading syntax fixes
You are a wealth of knowledge here. Thanks for pointing this out.
joseph.l.hunsaker
Where are the ACDC recordings?
Hey <@U03QRSUA87Q> They are hosted by the Trust Over IP foundation. I have a request to them for the following weeks: 2022-06-07 2022-07-05 2022-07-19 2022-08-02 2022-08-16 2022-08-30 2022-09-13 2022-09-27 I followed up with Elisa today, but currently there isn't anything I can do, other than keep asking in Slack. I'll drop a note here when I get the response.
<@U02N0K6LL93> <@U03QRSUA87Q> I have been told that ToIP will have all the links on Monday of next week.
It is worth noting that while the ACDC core community, for the most part, is tightly coupled with KERI right now, the "official" home of ACDC is within ToIP and the "Technology Stack Working Group Authentic Chained Data Containers Task Force" (#tswg-acdc-tf). There is a ToIP slack for ACDC and not all people who are in the official one are here, so conversations pertinent to ACDC should occur in the ToIP slack.
(for the benefit of all)
daniel.hardman
Does anybody know if witnesses must share the same keripy instance (common set of mailboxes) to gossip KELs to one another?
rodolfo.miranda
as far as I understand, each witness is an independent keripy instance, each with its own TCP and HTTP port and its own keystore. Just last week I started my journey with keripy and I have tons of questions (some posted in channel <#C025C56K2SC|keripy>)
Keep asking those questions in channels and/or in issues, Rodolfo, so I can document them as soon as the answers to them have crystallized.
Hey Folks, sorry about the delayed responses, Sam is on vacation, I am dealing with family issues and Kevin is taking up all the slack with GLEIF and our go live date of November 1st.
I'll give an overall witness description here and then try to answer specific questions where they were asked.
Each witness is independent and keeps local copies of all KELs that it knows about. We have not implemented gossip protocols yet, so as an alternative we have implemented the policy that all controllers are responsible for propagating their key events to all their witnesses and then passing out all receipts from each witness to the other witnesses. In addition, the controller is also responsible for introducing witnesses to each other when a controller uses them together. This eliminates the need to bootstrap any kind of witness introduction while we don't have gossip implemented. Gossip is dependent on UDP protocol support, which is dependent on CESR streaming of the events themselves, which Sam has started but not finished because the revised rotation rules have taken precedence. Once we pop that stack and have gossip working, witness (and watcher) communication will be simplified. But for now, keripy controllers make sure to introduce all witnesses to each other (admittedly a little chatty), send events to each witness, collect all receipts from each witness, and then send them back out to the other witnesses.
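In other words, plain pseudocode of that controller-side policy (this is not the keripy API, and the `introduce`/`receipt`/`accept` methods are hypothetical names used only to restate the steps above):
```
# Pseudocode sketch of the current controller-side propagation policy.
# The witness objects and their methods are hypothetical, not keripy classes.
def publish_event(event, witnesses):
    # introduce every witness to every other witness (admittedly chatty)
    for w in witnesses:
        for other in witnesses:
            if other is not w:
                w.introduce(other)

    # send the event to each witness and collect each signed receipt
    receipts = [w.receipt(event) for w in witnesses]

    # pass every collected receipt back out to all the witnesses
    for w in witnesses:
        for r in receipts:
            w.accept(r)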
As for HTTP vs TCP. Witnesses do expose a TCP port which will work fine for direct streaming of KERI events. However, the keripy agent and command line exclusively use CESR streaming over HTTP.
The reason is that we had to focus on HTTP in order to support mailboxes for controllers that do not have persistent Internet connection and that was built using HTTP.
So all the configuration we use currently favors HTTP for all communication
rodolfo.miranda
Thanks a lot <@U024CJMG22J> . I have no rush on any answer. I just throw here so other can catch up as well.
rodolfo.miranda
One more conceptual question regarding witnesses based on what you said: can a witness witness more than one agent at the same time?
Yes, the current witness implementation in keripy is promiscuous. As long as a controller commits to a witness in its KEL, the witness will provide a signed receipt. It is assumed that any deployment of witnesses will provide authentication over their witnesses to prevent them from witnessing for any AID.
rodolfo.miranda
Fair. At least from an implementation point of view I like the idea of having a witness validating multiple KELs.
Non-transferable identifiers and receipting key events are both so cheap, witness infrastructure can be deployed to support any configuration you need. Whether that is multiple AIDs using the same witness AID or one for one AID to witness AID, the protocol should scale to any level.
I imagine SaaS witness services will discover multiple scaling models.
rodolfo.miranda
witness/watcher as a service is floating in my mind
How is `kli` installed?
rodolfo.miranda
when you install requirements with `pip install -r requirements.txt`, it actually works with setuptools packaging, and an entry point is created for `kli`. In the end it's a Python call to `keri.app.cli.kli:main`
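Something like this in the packaging metadata (an illustrative setup.py excerpt showing how such a console-script entry point is typically declared with setuptools; not copied from keripy's actual setup files, which may differ):
```
# Illustrative only: how an entry point like `kli` is usually declared.
from setuptools import setup, find_packages

setup(
    name="keri",
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            "kli = keri.app.cli.kli:main",   # the entry point named above
        ],
    },
)
```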
I was looking for the following:
### Local installation
Once all dependencies are installed and working then run:

`$ python3 -m pip install -e ./`

Then you can run 

`$ kli version` 

to get a version string similar to the following: 

`0.6.7`
Submitting a PR now…
New channel created for IIW coordination:
I meant to bring this up on today's call but did not get a chance. Does anyone know anything about, or have any experience with, the following public key algorithm: GLEIF is being asked about supporting it in the vLEI ecosystem and my first reaction is, "that's a bad idea", but I really don't know much about it.
michal.pietrus
Found a paper discussing this algorithm. There:
At present, SM2 is also used in blind signatures, proxy signatures, threshold cryptosystems and secure cooperation between two parties. However, no one has conducted research on SM2's batch verification algorithm. This article proposes a batch verification algorithm for SM2 digital signatures, for the same signer and for different signers.
Seems someone found it useful. However, due to the language barrier you might have difficulty finding anything about SM2. The published  on IETF is indeed in English, but it would definitely require a formal audit.
Thanks <@U02N7K951DW>
petteri.stenius
Hi all, I have a couple of questions about credentials:
1. Are there plans for supporting querying credentials by attribute value or metadata parameters? For example, letting a credential issuer query issued credentials by LEI code or by date of issuance?
2. Semantics of OOR and ECR authorization credentials. Who issues OOR and ECR authorization credentials and who is the recipient? What values will be set in the personLegalName, officialRole and engagementContextRole attributes of the authorization credential?
3. Extending credential schema. For example, how can I extend the ECR credential schema with an email attribute?
4. Credential expiration. The roles that OOR and ECR credentials represent often have a known end date, based on some business contract for example. Is there a way to set an expiration date on these credentials?
Thanks!
I want to know these answers as well. Good questions. Here's my understanding from what the KERI whitepaper and IETF specs say: 1) Querying credentials by metadata is supported, and querying by value is part of the graduated disclosure mechanism, so yes, support exists or is planned for both. 2) These look like vLEI-specific credential types. Only a vLEI expert can answer completely. I suspect either a QVI or a company itself issues both of those types. Don't take my word for it. I imagine the values are context specific. If you are looking for a type system to select from a distinct set of values, there may be a specification somewhere that constrains those values. Otherwise the values are up to the issuer.
3) Extending the credential schema should be doable by adding another ACDC linked to the original, as ACDCs are an extensible graph structure. 4) There's a way to set the expiration date on any credential, since an expiration date is a regular property of a credential. Regardless of an expiration date, you will want to verify control authority for a credential by verifying KELs.
My understanding is that only a QVI can issue an OOR, but it requires an AUTH credential () from the Legal Entity to the QVI; the Legal Entity must hold a valid Legal Entity vLEI credential.
I believe the ECR can be issued by either the Legal Entity or by a QVI if the Legal Entity issues an AUTH credential () to the QVI. In either case the Legal Entity must hold a valid Legal Entity vLEI credential.
petteri.stenius
Yes, the QVI is the only one who can issue OOR credentials, and the QVI can also issue ECR credentials. The QVI-issued OOR/ECR credentials require OOR/ECR authorization credentials. I would like to understand the authorization use case. The OOR/ECR authorization credentials have basically the same attributes as the actual OOR/ECR credentials (personLegalName, officialRole and engagementContextRole). Is the LE authorizing the QVI to issue OOR/ECR credentials one at a time, where every QVI-issued credential requires a separate authorization from the LE? I would have expected the authorization contract to be at a more generic level, where the LE authorizes a QVI to issue any OOR/ECR credentials.
In this morning’s call about making JSON-LD `@context` optional, I thought I heard Sam say that the DID answer was to make it required and that this was a mistake. However, in the VC issue 947 I see comments like that indicate that the resolution was to NOT make it required. Am I misunderstanding?
andreialexandru98
Hey unfortunately I didn’t make it to todays meeting where can I find the recording?
rodolfo.miranda
The ACDC ones are on the ToIP page:
rodolfo.miranda
Sometimes Zoom recordings takes a while to be posted. Are you on the ToIP Slack?
andreialexandru98
I’m only on this slack!
same
rodolfo.miranda
they made it NOT required, except for JSON-LD
rodolfo.miranda
let me see how slack invitations work..
I’m looking for the link on the wiki now
rodolfo.miranda
I think that you need to be a member. Anyone can be, but need to sign some docs.
ok, yes I just emailed **
Yes Trent, I just got to that part of the thread and as Rodolfo said `@context` is not required for plain JSON yet it is required for JSON-LD.
Does this look like an accurate representation of the schema field `s` in a KERI exn message? I’m looking for an example of an ACDC so I can compare it with other representations as I read through the `@context` argument.
image2.png
Here’s the overall diagram: I’m comparing `@context` to `no @context` to ACDC to JWT. This diagram is a first, early draft and is not complete. I’m just posting it here for reference for the questions I’m asking.
@context optionality.png
I pulled the KERI EXN message from `keripy/ref/Peer2PeerCredentials.md` and it didn’t have an `s` field so I added one. I know that might not be right, I’m just trying to illustrate what I understand of the ACDC spec. I copied the `$id` field from the schema in the `JSON Schema for VC Template` section in the same *.md* file.
The ACDC spec in section 3.2 says > Correspondingly, the value of the top-level schema, `s`, field MUST be the SAID included in the schema’s top-level `$id` field. Which is why I thought the `s` field could be added to the `exn` message’s `vc` section to illustrate where the value of the `$id` field would go. I’m sure I got something wrong, though is this headed in the right direction?
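So, schematically, the relationship I'm trying to depict is just this (placeholder values only, made up for illustration, not a verified ACDC or schema):
```
# Placeholder values only: illustrates that the credential's top-level `s`
# field carries the SAID that also appears in the schema's `$id` field.
schema = {
    "$id": "E_said_of_this_schema___",   # SAID computed over the schema itself
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Example credential schema",
    "type": "object",
}

credential = {
    "v": "ACDC10JSON00011c_",            # version string (size is a placeholder)
    "d": "E_said_of_credential____",
    "i": "E_issuer_aid____________",
    "s": schema["$id"],                  # MUST equal the schema's $id per ACDC section 3.2
    "a": {"d": "E_said_of_attributes___", "claim": "example value"},
}
```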
Thanks for posting this <@U03EUG009MY>. We’ve been learning a ton by watching the W3C thread and from <@U03RLLP2CR5> <@U013SSREL0H> Drummond etc. I’m hoping the community finds a win/win compromise. And look forward to discussing with the KERI community more
For those tracking the `main` branch of keripy, we just pushed all the changes from the latest development branch into main. The major change for this upgrade was the Neo changes that create stable transformations on CESR. Also included is work in the Hab and Matter classes towards support for the new revised rotation rules to more fully support custodial key management. Finally, a bunch of upgrades to the `kli` commands are included.
daniel.hardman
I think we tried to summarize too soon. Right now, `@context` is required in all cases. Issue 947 is a request to make it not required for plain JSON. However, it looks like this issue is going to be closed by denying the request. That would mean it is always required. :disappointed:
How disappointing. It was an illuminating thread, even if only to see how committed the W3C is to RDF. It would be great if the spec were relabeled to JSON-LD Credentials. I have some catching up to do. I got to your comment last Tuesday, Daniel, and didn't get much further. It was a long read to get to that point since I had to go read up on a few things to give me context, no pun intended.
daniel.hardman
That pun is so a-propos. :slightly_smiling_face: Yes, I proposed that the spec should be renamed "VC-LD"
daniel.hardman
But that's not going to happen.
rodolfo.miranda
Is it possible to add an anchored seal `a` at Inception? Seems that keripy does not allow it.
Yes, this is definitely possible and likely just a shortcoming of the `kli` and REST endpoints
Would be a nice PR to submit though :wink:
rodolfo.miranda
PR submitted
As part of the discussion here: we have renamed master to main on keriox: the following should catch you up:
git branch -m master main
git fetch origin
git branch -u origin/main main
git remote set-head origin -a
As per <@U0448S72CQN>’s suggestion we did not add a dev branch.
I just posted a note in <#C046U2MAYLE|iiw-nov-2022> regarding `did:keri` . It is about a session I'm planning for this week at IIW, but thought others might be interested in the resolver reference implementation I will be talking about
Is the KERI meeting on today or canceled due to IIW?
Cancelled due to IIW
Thank you Kent.
Could one of the IIW35 participants post the link to the session minutes page? Is this resource publicly accessible?
joseph.l.hunsaker
Another silo ID system, tied to the ETH blockchain. “Privacy first”, yes, and security last? I am not impressed by this marketing writeup and share the concerns of the critics.
‘ “While (Polygon ID) uses biometric information for verification purposes, that data isn’t provided to anyone unless the user gives explicit permission,” he said ’ There we go. Biometric data should be encrypted by priv keys the subject controls. Any other mechanism is subject to fraud and deceit.
It will be soon I believe
robert.mitwicki
Due to recent activities in the KERI community, and some uncertainties that have arisen regarding the license of the Human Colossus Foundation implementation, I would like to provide you with some insights. As you all know, HCF has a very specific strategy regarding protecting the community and the implementation we are working on. We have already invested quite some resources and money into what we have built. Our vision could be described as: building with the community, for the community. Based on that we did a fairly in-depth evaluation of different licensing options and we ended up with (in our opinion) one of the best options, which is EUPL 1.2. I know that a lot of people do not have a good understanding of that license and sometimes misinterpret its consequences. This is why we prepared a short summary of our reasoning and argumentation for why we picked the EUPL and why it matters to us. Here you can find it: I would love to hear everyone's take on that, and please prepare questions if you have any. As you know, as HCF we are working on many components of the KERI ecosystem and we are pretty advanced in it, including keriox, cesrox and said, all of which are already separated out and used in multiple projects. Obviously we don't mind if that work were duplicated because you found a good reason not to go with the EUPL license, but as far as I know there is no reason to do so. And if someone were able to prove the benefits of another license for those components, one that fulfills our demands, we are willing to change the license for the benefit of the community. Until that happens, we will protect our community investment in the way described above. I have already requested a slot on the KERI call to provide some more insights, hear everyone's opinion in that regard, and eventually address some of those questions. If everything goes right we will have a chance to speak on the next KERI call, but beforehand feel free to provide your feedback and post questions in this thread. Best regards, Head of Technology Council at Human Colossus Foundation
CESR for 1st year wizard (notes from IIW presentation):
What is the best start for building an authentication and authorization system using KERI? I am mapping the KERI protocol onto a traditional sign-in and authorization model. It seems that the sign-in would be replaced with a profile acknowledgment based on a stored credential in some local browser storage that then personalizes a user’s experience. This requires a web wallet or web data satchel (help me name this). Do we have any examples of storing credentials or keypairs in indexeddb? I know it’s not the best solution though I’m just doing a POC prototype. If there are no examples then it seems all I need to do for a web wallet is the following: 1. The ability to create, store, and recover a keypair (inception and rotation), 2. The ability to send messages to witnesses and store receipts, 3. The ability to view stored credentials, 4. The ability to send existing credentials (graduated disclosure), 5. The ability to issue new credentials, 6. The ability to sign events, or respond to signing challenges. I’m still reading through the SKWA document so maybe it covers many of these points.
The vision for SKWA is much lighter on the client side than you have expressed. The idea is to only provide key generation and event signing in the client. _Everything_ else stays in the agent.
<@U013F46SDRR> Thanks for the write-up and clarifications. I am not sure what ‘activities and uncertainties’ you are referring to, but I certainly have made numerous references to the license change as a reason keriox ‘exists’ under web of trust, though I have always qualified it with “the HCF guys are so awesome, they donated a version before the license change”, purely to clarify why that repository exists at all. I’m not sure of other people’s intentions for keriox, but the notion of cesrox came about after the last IIW, where CESR was identified as a source of "this is hard to adopt" - so we used the keriox you contributed under the apache2 license to start cesrox (this IIW). We're currently in the process of completing changes for the "neo" changes and subsequently the indexer changes in keripy. I'll push Phil's and my branch - which, while incomplete, shows progress towards parity with keripy in terms of cesr. Something that the upstream keriox impls would want to do as well.
Got it, thank you. Once I finish SKWA I’ll think about this all and circle back.
We've put a lot of thought into SKWA and written some things down that we haven't had a chance to review thoroughly at the dev meetings. Perhaps a conversation about how we envisioned it might work could provide some insight before you finish it.
(neo is changes to the way we do padding and will be evident in the PR to cesrox)
I'm very interested in that conversation.
robert.mitwicki
By ‘activities and uncertainties’ I mean that people are not sure how the know-how brought by HCF can be reused and how we can collaborate since we are under another license. A lot of people treat the EUPL as GPL, which leads to wrong assumptions. Yes, we (and Jolocom) contributed keriox to weboftrust, as the work started on Apache 2 and we didn't want to simply pull that from DIF. We decided to switch to the EUPL for better protection of the community behind it. What was already done stays under that license; we cannot withdraw it, or at least it wouldn't be nice to try to hide it anyway. I am fine with two projects under different licenses. We have strong arguments why we chose the EUPL - mainly to protect the community and avoid a situation where a private company would pick it up and extend it without sharing back, causing forks and a split of the community or even the protocols. It is beneficial for the whole community if we continue to use the EUPL. But as I said, it is our world view, and I completely understand if others take a thoughtful decision based on facts and not just false assumptions. We are investing quite a lot into the development of those components and I bet that other community members would like to do that as well. Doing it twice is just a bit of a waste of resources, especially if there would not be any good argument for it. Especially since we are almost done with cesrox as a separate package.
<@U013F46SDRR> Thank you for the explanation of EUPL, and especially the link to the full strategy write-up at the Foundation. This helps me with some of my previous reservations, and I see EUPL as a valid approach. I might be willing to support the development under that license. — though I personally prefer the most-permissive and least-wordy licence. There is space for more permissive licensing. We’ve learned from GPL & LGPL that people will shy away from viral & restrictive licenses which hurts adoption; that doc makes the point that even Google has mistakenly marked this license as unusable … and although it’s a mistake, it is a barrier. Honestly, I think SQLite has done the right thing with ; sure, it may not force the community to contribute back, but it’s clear… and I see the threat to adoption & collaboration (whether by mistake or by malice) as the bigger threat, higher than the threat of someone failing to give back. But I’m now more willing to follow the group if they want to build on THC code.
rodolfo.miranda
First time I heard of SKWA. Just scanned the doc and it caught my attention!! When does coding start :grinning::grinning:?
rodolfo.miranda
What's the relationship between SKWA and signify?
robert.mitwicki
Completely agree that lengthy and complicated licenses do not help regular users jump on board. But keep in mind that in many cases licenses like that do not have any grounding in real law. For an individual person maybe that is not very important, but when you deal with businesses or governmental services you can't simply skip that bit. Or define "do good not evil" - good for whom? what does good mean? If you face a serious business and their legal department you would bounce off it relatively quickly. The community needs support to navigate through this space anyway. This is why we teamed up with lawyers, to make sure that we get it right and are able to protect our community in the right way. Through our experience we went through many open source communities and initiatives, and personally I must say that I am fed up with the direction this is taking. "Open Source" has become a marketing tool; nobody thinks about collaboration and community, and big tech companies use it to warm up their image. It has led to a trap called "platforms" and to providing a free development force for those who can actually monetize a specific technology. You use open source and think that you are independent because you have control over code, but you are under the control of a corporate machine. One of many examples is Android, which is a great open source OS, but without using Google services this operating system is useless and not user friendly. You can NOT have a simple push notification in your own "open source" app running on an open source service, using an open source backend, without Google having a stake in it. Decentralization is not only about technology; it is also about governance and distribution of power. Letting everyone do whatever they want (more permissive licensing) is not always good for what we do.
Signify is the new name we came up with
rodolfo.miranda
Catchy!! so, codebase to follow is signify-ts, right? I'll be dedicating time to it, probably after new year.
Same here Rodolfo! It’s going to be fun to work together
rodolfo.miranda
Is there a specific message for a witness to tell the agent that its key events won’t be witnessed?
I haven’t heard of this though I’m looking it up. Will you ask this on Stackoverflow? The KERI tag has no questions:
image14.png
Let’s get that tag started! I’ll post my answer here and there.
A message is the closest kind of event I could find that might serve this purpose. Nothing else jumps out at me yet. It would seem odd that a witness would refuse to witness a controller’s key events given that the controller can select its own witnesses. The following quote from section 10.2 *Indirect Replay Mode* of the white paper illustrates this fact: > _The reliance on a designated set of witnesses provides several advantages. The first is that the_ > _identifier’s trust basis is not locked to any given witness or set of witnesses but may be transferred_ > _at the controller’s choosing._
So, given that a controller selects its own witnesses it is the responsibility of the controller to validate the suitability of each witness prior to including that witness in an inception or rotation event as a declared witness. As a part of the witness evaluation process I could see value in adding a protocol-level event for a given KERI agent (witness, watcher, juror, judge) to indicate it is not able to or will refuse to be a witness for a given controller, though this delves into business-process specific decisions that are likely bespoke to a given implementation and thus don’t yet make sense to add to KERI itself.
daniel.hardman
<@U024KC347B4> or <@U024CJMG22J>: is there a standard place where the OOBIs of GLEIF's new production witnesses and its internal and external GAR are published?
daniel.hardman
You gave me some of these values once, but I don't think I have all of them, and I would like to look them up from their canonical location rather than from a slack transcript.
rodolfo.miranda
I asked because a witness may stop responding and it will be up to the controller to check why. However the witness may have valid reasons that can be communicated directly (lack of resources, unpaid service, etc)
rodolfo.miranda
for example, are the GLEIF production witnesses willing to accept KELs from any agent? what about spam?
GLEIF Root AID is published at the following well known OOBIs:


We will likewise publish the OOBIs of GLEIF External and GLEIF Internal once they are created
You may well find this interesting. They are looking at KERI as a DID implementation: "DID Keri Prototype of a potential identity system for Radicle"
For the purposes of SKWA what is the definition of control? Is it having physical possession of and root access to a device? Here’s the statement for reference: > The server may be deemed somewhat secure if it was in the client’s control at boot-up but may not be continuously in the client’s control throughout its run time. I’m trying to understand whether I introduce unacceptable security problems by having a SKWA client be on a mobile device and a SKWA-compatible KERI full node in a cloud server I don’t physically control.
can someone post the recording from our last KERI meeting here
daniel.hardman
: I would like to invite members of the KERI community to a special presentation/proposal about how to bring several different community efforts together -- folks working on DIDComm, as well as Hyperledger Aries, Trust over IP, Decentralized Web Nodes, KERI. The meeting will be this coming Monday, Jan 9 at 9 pm Central Europe, 3 pm Eastern, 1 pm Mountain, noon Pacific. It is on a Zoom bridge owned by DIF, but it is allocated to the DIDComm User Group rather than the DIDComm Working Group. This means the call is open to everyone, not just DIF members. Please come, and please spread the word. The Zoom link for this meeting is: .
cool, see you there.
rodolfo.miranda
great, great, great!!
I’ll be there
andreialexandru98
Excited for this!
andreialexandru98
Feels like all of the avengers are coming together!
assemble!!!
"Let's kill him properly this time"... maybe that's the wrong Thor quote, but count me in!
*KERI community meeting recordings?* - noticing that recent recordings are not posted in the . Is there a location where they can be accessed or does it require someone with zoom recording access to post the recording link in the agenda post meeting?
Is there a recording?
There will be, published by the DIDComm User's Group
Sorry I missed today's call. Was it discussed there too?
Yes, there are notes in the agenda about it
Recording for the GUT (Grand Unified Theory of Trust) presentation by Dan Hardman
rodolfo.miranda
Recording: The most recent recordings DIDComm User Group
Hello all, I'm first time here, glad to see familiar faces from didcomm community here too
rodolfo.miranda
Welcome Pavel!!
robert.mitwicki
thanks <@U03R27M4FM3> I missed the call but watched the recording, and I am missing some argumentation on the main point which I believe triggered the idea for GUT. If I may ask <@U03RLLP2CR5>, why do you think that there is no proper mindset in place in those communities to be collaborative? Do you have specific examples? From the perspective of HCF (and HCF is part of a few of those communities: DIF, KERI, ToIP, Aries) we have found it very collaborative, and lack of adoption of those specific technologies is more a problem of specific use cases or lack of time to contribute (at least in our view). Not sure how the additional umbrella can help here; wouldn't that be additional overhead? Happy to learn about what obstacles you encountered in achieving what you presented about GUT. It reminds me a bit of the situation from roughly 3 years ago where we ended up in a similar position and tried to tie together governance, identity and data spaces to have a holistic framework for the digital economy. We tried with Aries, DIF and later at ToIP, but in the end we found that each community is led by different needs and it was hard to enforce a holistic view. This is how HCF was created. So far we have been able to unify all of the above into one stack, linking technologies created in different places (contributing in those places at the same time) into what you would call GUT. If you haven't yet, I recommend reading the introduction to the DDE Stack: We continue sharing and promoting our achievements in all the mentioned communities, but we know how hard it is to break the ice in the way people think about certain things, and how hard it is to enforce it. Based on our experience, just following the Unix philosophy is more than enough and helps with adoption. This is how we were able to introduce OCA into the Aries ecosystem, and in the same way we are building keri components (cesr, sai, tel and more). Happy to hear your experience and understand the reasoning.
The Zoom link for ACDC meetings has changed for 2023. The new Zoom link (changed for 2023 by ToIP) is
andreialexandru98
Might be a stupid question but can you have a decentralized web node act as a keri backer?
Seems like a good question to me!
I don't know enough about decentralized web nodes to answer that. I'll have a look
rodolfo.miranda
The Registrar backer stores the witnesses' KEL in a Registry that by itself provides a secondary root of trust. Ledgers are a well-known fit because they provide some guarantees on the ordering of the events, timestamps, etc. Of course, that trust comes from the trustworthiness of the ledger, how secure and how decentralized it is, for example. Can a DWN be considered a secondary root of trust? who controls them?
andreialexandru98
A did controls a dwn
rodolfo.miranda
can the DID controller replace data at will? can it fork the KEL and show a new branch, for example, without being noticed?
andreialexandru98
It depends how one would define the protocols
andreialexandru98
I am sure it can be written as a write only protocol
andreialexandru98
The underlying data cannot be altered by the owner as far as I understand as there’s a hashed state concept that would make the nodes not in sync
<@U03P53FCYB1> writes: ‘Ledgers are a well know fit because they provide some guarantees on the ordering of the events, timestamp, etc.’ -> My take would be that the witness KELs themselves have the ordering under control via the sequence numbers of the individual events and the ‘first seen, can’t unsee’ principle? I might be missing something. Please let me know.
rodolfo.miranda
Yes, the "first seen" principle is key in backers. They write to a ledger that can not be altered (supposedly), so the duplicity problem by forking the KEL is hard to achieve. They may also provide a discovery mechanism to resolve AIDs in a given ecosystem.
Yes, that’s a good one. When I heard Daniel asking questions about batching verification requests I got the idea of a “KERI/ACDC explorer”
What does challenge and response provide? I’m looking specifically at the `challenge.sh` script. I see OOBI generation and resolution and then challenging. I understand why OOBIs are needed for discovery purposes. What purpose does challenge and response serve?
Multifactor Auth
thanks
you provide an OOBI and through OOBI resolution you get end-verifiable proof of current key state. Then in a live session you challenge the controller who signs the challenge phrase proving control of the private keys.
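For anyone curious, here is a minimal sketch of the cryptographic core of that flow in Python, using PyNaCl purely for illustration (keripy does this with its own Signer/Verfer machinery; the names below are not keripy APIs):

from nacl.signing import SigningKey
import secrets

# Controller side: the key pair whose public key the verifier already learned via OOBI resolution.
signing_key = SigningKey.generate()
verify_key = signing_key.verify_key

# Verifier side: issue a fresh, unpredictable challenge phrase for the live session.
challenge = " ".join(secrets.token_hex(2) for _ in range(6)).encode()

# Controller signs the challenge, proving present control of the private key.
signed = signing_key.sign(challenge)

# Verifier checks the signature against the current key state learned from the OOBI/KEL.
verify_key.verify(signed)  # raises nacl.exceptions.BadSignatureError on failure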
I have been tasked with doing presentations to a US government agency (that will remain unnamed) for the purpose of teaching them about the vLEI ecosystem (KERI, CESR, ACDC, etc). I was given a list of questions to answer, many of which started with "why". While those "why" questions have been the most interesting to answer (as well as the ones that cost me the most sleep), my favorite question to answer was "How does KERI support cryptographic agility, specifically how could it use new algorithms to provide post-quantum security?" I believe Falcon was specifically mentioned as part of that question. Tomorrow morning I will answer that question and with the help of Sam (of course) I was able to create a KERI inception event using Falcon-512 to generate public and next keys and sign the inception event. I did this using the Open Quantum Safe library () with the python bindings () integrated into KERIpy. Here is the inception event (with a hefty public key):
{
  "v": "KERI10JSON0005af_",
  "t": "icp",
  "d": "EI8G61z6s5c6-jrDSQ36eIx5wt-VV5T_Z_YMoZ6EyRKq",
  "i": "EI8G61z6s5c6-jrDSQ36eIx5wt-VV5T_Z_YMoZ6EyRKq",
  "s": "0",
  "kt": "1",
  "k": [
    "1AAKCYRxBesmbvgHwYeT2UuIHPlmYm0QTjoR_hiFKyKiSFyrdiELXrijOTJgWHAGWmpxChBdabDlKK_fFRwS4WBoHJpBZfBGblklAu304orZh42bUoTnRzDqcpkcYZwMWJhx3cmoicB29c3hShf2cNnak9tLMDMb2NgeoAxu-senZ0gV13U6zCgdsAzY3kR9ZIWMWCWA7Wb4vfW7HBqtwNFpqYyXSfLj6zsWhqYmM6rhgaorRgD0JdunibrZhrie96TY7TWQSfmfqWT5rMCSlzzOh3fGGRn1wTLlFoV4JkJYGOi9jDFX45gNO67--GVPS51tlWpnaZqzYqq3hu5WQgkdtOKyYkmWmrK1xQ-Wmj4JkzrmZmj4-zURxySRSvnWO6_KMyIZp70q7AxMHMF3LJ2QHsm1vQ8APPzPhgmdO7lm-AAILPaTylKbuB97KPZM5XsUy0Kcg7MDL8ygqqqBh9D3FGlUOsXj3HtwtwTLAuAy8CPyeufUGzGz5diV_hnsuYpNgJ0UZRt4YIZGMsV209pBlIT1FJqwULe_oVJrr9uyi5KKERXocXZtoGcKryxlNp6wzZKxZ7z9lyCpTEuyeYi5TU-YKlP3ndaCcf7H1GEXhoBo6ml0klU3gOdgzm9LmtgGKyiIwBPuaaJG6jEMcFHQOkQiA90PqzzQHJNEqiMzz9itIYQ69YpmcOFbNQHaowUF9QvlJgxgGB2KBkK1hyWz55ZGrDCUHZm1ERxzrZxT3KtDJN5otelr6DJ6r-MGGAqpxiMx32X3MDJQQ2qOzl3k1UM0Ya60FgMBSmXHjeBPkEaILmUgdCH8GcqCGEyWQ0XHvN5G8ZIF5QyghtSM7VYugB8IGtdlqQLPwShANwCRIWXz3bm04RcElRoLljy0qpoYDRUaI8r83M5AvLrHkO96shjWwgTeTbpcwNMKLXMQyDCEVo-yk7hGYE1iABIxqSvL2bM_JZ2QHcV9D6qB-DUXvW5c0eJanGVqxes1h9xNmnHS4mJQmU2paJlvGpAc4buYvJgeocbK1xMh-qYh51eIC-ZQ0LnVNs8QmDox0lzqkXblFpLFK92eQMKBYGEkRMa9E_2duwuyWRGMYxQxcbmB6m2yRoJpZG4lcbxOSWZnh-GM2WD1EtTXH3VL2EULN0cIFwlEKupNYpulUGs-XJRDNC1iPXNxTqXeSxeSsQKivgaz"
  ],
  "nt": "1",
  "n": [
    "EL9JL--iC82ELBpXMmMBf9JdphWND3l6w7orLdLRvjzk"
  ],
  "bt": "0",
  "b": [],
  "c": [],
  "a": []
}
And here is the signature:
-AAB1AALOTQGtYrp3F7h8ua7BL0bOwDiB6nlxJ7Q4GEYsIR4jv_2VxCILI4BbQ6-v8hSXuyVaU2RlFmFv6BNcMGumxUFyKfwTFI_1HtkKgGo1TmQdAW_nZK4BnUpMLK6vTM3-shGkt77usooC0ZxGNxOl9sthgltg2AEEsyI6nGyAUtZ7kz0OL_s4FfVXTDPtisMiXoVcotRRQpejh-9Hx16Zaxutg-ZK2htbrwvwmJoMIQHPeaby9Iyu5_UMs2v4nbX52b5iza6eZmcR1mhX2p4iV55ojgpCrTmrSbJmMYfAQMNOIHkT8knjrvldxTXlKSOL5BneTfKtBZeRAKI2MT3iQERZo78bQtaUVIbf42utUx70YyAaAST_numnLjL2_63r73_6ZhJLEvgjItLVjeKFonoUPSaW2UTnfJJ1gddlTlqtgkzQXZRcjpEL85OMNgk8Bf9no5iWQwC5cpM99Z1kox138gF2ZbuQe1ERKk-bjqrG2LO8cdzpOwPWcROsQRzZtInvuxWJb5tZKwyT-ghkzZQXKsQLjmGxyKYJGKg4BLjF6jEaBUaOUGta4p-hrODesiRWKMWuawTMYP3U_TyIzHaqKRY9j1Gs61NnEzeuCpmtYnA5IpTOUC5593YIzs1_rQp5inGJ3sktcKLkIyaLwHHX6pIm31MXzk_jHxw-Tii_MZJ-hbeUxf2Z9R2u1MlwiRlnd3PoYxXT9s_HY9alpNICmZwtgau3qtZ4ilPBXp6lfMiWdFVAlWbkE-amAq5Q710-VadkTFB-rC5i2FxyeskmQLfK-qyEljF6pCzIWmixsy1baUr-qZCUmWL_vpF6sifx3kScUkUUOPamixkQfvP-Or4rKq9JopmCUYx5ObLemi1Ru63wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
I will push my branch to WebOfTrust/keripy with the new (experimental) derivation codes and use of `liboqs` and python bindings to create keys and sign the event.
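For anyone who wants to poke at the Falcon piece before the branch lands, the bare liboqs-python calls underneath look roughly like this (a sketch of the library usage only; none of the KERI derivation codes or CESR encoding is shown, and I'm assuming the standard liboqs-python API):

import oqs

message = b'{"v":"KERI10JSON0005af_", ...}'  # stand-in for the serialized event body

# Signer side: generate a Falcon-512 key pair and sign the message.
with oqs.Signature("Falcon-512") as signer:
    public_key = signer.generate_keypair()
    signature = signer.sign(message)

# Verifier side: verify with only the message, signature, and public key.
with oqs.Signature("Falcon-512") as verifier:
    assert verifier.verify(message, signature, public_key)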
One of the "why" questions I have to answer is "why did Sam create something new (CESR) instead of using what we already have (CBOR, COSE, Multi-Codec)?" The answer is very clear to me now. We believe that all data in the future will need to be authentic data and therefore, cryptographic primitives (AIDs, key digests, signatures) will be included with every bit of data transmitted and stored across the internet. Those primitives must be represented as strings and not data structures to ensure we are able to ship the most compact form of those primitives. CESR is the only encoding format that enables us to ship cryptographic primitives as first class encoded strings that are more compact than CBOR structures of the same data and even more compact when shipped as binary.
When you look at the size of that public key and signature that we will have to use to be post-quantum secure you can see why this is so important.
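A tiny size illustration of that point (just the arithmetic, not an actual CESR encoder, and based on my understanding of how the qb64 encoding reuses the space Base64 padding would otherwise waste):

import base64

raw = bytes(32)  # stand-in for a 32-byte Ed25519 public key
b64 = base64.urlsafe_b64encode(raw).decode()
print(len(b64), b64.endswith("="))  # 44 True -> one of the 44 characters is spent on '=' padding

# A CESR qb64 primitive for the same 32-byte value is also 44 characters (compare
# the 44-character identifier and digest strings in the event above), but its
# leading character is a derivation code identifying the primitive type. The same
# space therefore carries both type and value, the length stays a multiple of 4,
# and primitives can be concatenated and round-tripped to binary losslessly.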
Phil this is great for me to explain to people to help them understand why all this stuff is so important. This was a point I was missing when I described why we are going in the direction we are. Really appreciate it. It was the same argument, really, when I worked at Certicom. We were into ECC when everyone was using RSA, and additionally our implementations were so optimized for specific platforms that they were able to operate in constrained environments where others weren't. Decentralized identity is going to need to be deployed in a similar fashion, it isn't all laptops and mobile phones.
I know that adopts KERI as its core key-management technology. However, are there any other commercial services that also use KERI as its key management technology?
Not too many yet, maybe have a look at what Provenant’s doing.
Last ACDC meeting, near the end, Daniel asked a question and I came up with a potential solution to it by concatenating two identifiers to form a single one. The idea was that a namespace would be hashed to provide the first part of the identifier, and a complete data structure to provide the rest. I’ll now give a complete description that I believe will create an auto-governing system, potentially addressing Phil’s concern. Sorry for not using the typical one-letter identifiers; I don’t know them off the top of my head, so I’m going to be verbose so that I can communicate clearly.
{
  namespace: {
   digest: "E...",
   name: "xyz.qui.domain.identifier",
   issuer: "did:keri:E.."
  },
  digest: "E...",
  version: "0.1.0",
  // other data
}


If we combine (`namespace.digest` | `digest`), we can create a fixed-length searchable (by digest of issuer and name) versioning system where only the identified issuer can create versions of their data. There is no administration required, I believe. The trick is that `issuer` is an input to `namespace.digest`, and `namespace.digest` is an input to `digest` (we could use a compact expression of this information). The caveats:
• to allow this data to enter such a system, one needs it to be signed by the issuer
• to prevent abuse, one needs to ensure that one has the ability to throttle/cap by issuer id (unless, I suppose, resources are unconstrained?)
• consumers would be pulling _all_ versions for a given `name`/`issuer` pair
Is this new/useful/flawed?
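Purely to illustrate the composition described above (not an answer to the question), here is a rough Python sketch; hashlib/sha256 stands in for the real SAID derivation (KERI defaults to Blake3 plus CESR encoding), and every name and value is hypothetical:

import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

issuer = "did:keri:E..."              # placeholder issuer AID
name = "xyz.qui.domain.identifier"

# the issuer is an input to the namespace digest, so only that issuer can mint it
namespace_digest = digest(f"{name}|{issuer}".encode())

# the namespace digest is an input to the content digest
payload = {"version": "0.1.0", "namespace": namespace_digest}  # other data elided
content_digest = digest(repr(payload).encode())

# fixed-length composite identifier, searchable by its namespace half
composite_id = namespace_digest + content_digest
print(composite_id)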
What would prevent another issuer from registering the exact same name? How then would someone tell which is the correct hash? I feel like namespacing is for human readability, to bridge between what a human sees and understanding it’s the correct issuer (for example). So I’m not sure what digesting the namespace buys us here. Namespace is a reputation issue and therefore should be resolved by trust anchors, not any central registry.
The issuer id bound to their key and the sig on the message being accepted?
You couldn’t insert info that you didn’t control? Maybe I’m wrong
Or maybe I misunderstand intent. But I was proposing the first digest be on not just the namespace but also the issuer id, making the digest bound to the sig the issuer can create. I understand if someone got a copy of an already structured and signed doc that the issuer chose not to insert, they could insert it. Is that what you mean?
And you are right, I was assuming that the identifier even being present in the system means that it had been somehow validated
But once that is established, the system I propose may still have a purpose?
(Validated meaning there is some signed chain going back to a root that validates the issuer as a member)
Probably abusing terms. Anyway sorry for the noise, I thought maybe I was on to something cool.
A small _side step_: Validate, verify, validator and verifier are terms we might have to reiterate now and again. In our domain (KERI/ACDC) we seem to be able to work within the broader eSSIF-lab definitions. Mind you, the __ definition is very general; in the KERI/ACDC vocabulary ‘validate’ currently has diverse meanings including this one. (general) **, of determining whether or not that data is valid to be used for some specific purpose(s) of that **. (general) **, of determining whether that data is *authentic* (i.e. originates from the ** that authored it), timely (i.e. has not expired), and conforms to other specifications that apply to its structure. In the KERI/ACDC domain we have come here: If your border case / criterion is authenticity, as one could understand from “Validated meaning there is some signed chain going back to a root that validates the issuer as a member”, then the first part of the sentence is what a verifier in KERI/ACDC in a narrow sense does; the second part, ‘validate as a member’, is more a validator’s role. Example: Your keypass at the hotel might verify, but the doorman might still not validate you as a guest allowed inside (validation). It’s all about how we as a group collectively understand the criteria individual members use to draw lines between closely related terms in our terminology, even if the borders between certain terms are vague. At the same time it would come in very handy if the outside world, outside of KERI/ACDC, would still know what we’re talking about. “I first have to try to understand what you mean, and then we might enjoy the bliss of reversal.” I pass the baton back to Phil / Jason.
Right, and that was why I used the word validated - because it wasn't just a verification that was required to ensure participation allowance in the system, and I guess this is what Phil was talking about - that gating of participation makes the system still dependent on some type of business level decision. I actually take great issue with people saying 'hash' when they mean 'digest'. A hash is a function that produces a digest. They aren't technically the same thing (or never were), but everyone uses them interchangeably.
What I mean is - some business decision, in my scenario, is involved in admitting the participant in the first place.
robert.mitwicki
I know about a few in telecom, supply chain, IoT and data processing. KERI is great as a base for creating decentralized governance, so anywhere you deal with a diverse ecosystem it would fit perfectly. Companies are starting to adopt this technology after realizing the dead end with their blockchain pilots. At Human Colossus Foundation we created the Colossi Network for that purpose, which is a network of private companies adopting and implementing the technologies of Dynamic Data Economy (one of them is KERI).
joseph.l.hunsaker
joseph.l.hunsaker
Anyone familiar with “Space ID”?
YACIP?
Yet Another Centralized Identity Provider?
“On SPACE ID, Allen’s username, :Allen, can be linked with his identity across multiple blockchains. You’ll be :Allen on BNB, :Allen on Ethereum, and :Allen on Cosmos.” YAMM -> “Yet Another Middle Man”
Whatever you do don’t tell their investors about autonomic namespaces. Someone might short their token and cause a premature liquidity crisis.
YAA-> Yet Another Adam.
daniel.hardman
Folks, I have looked diligently for the recording of the meeting that was held a few weeks ago, wherein the naming conventions in keripy were explained. I was told that it was an ACDC meeting, so I walked through the ToIP wiki. I found all the agendas, but no links to recordings at all. Perhaps I'm just lacking in insider knowledge about how to find ToIP recordings? I also looked at the keripy and keri repos, hoping to find meeting recording links. I found in the KERI agendas a reference that I think is to this content. It's in meeting notes (captured by <@U024CJMG22J>?) for the KERI meeting on Jan 24 (). The agenda says at the top, "No recording made." Then, three lines below that, it says, "English semantic naming by <@U013SSREL0H> (see recording)". I don't know if this is the meeting that I need, but it sounds like it? So: does any recording in fact exist? If not, are there any other materials that I could consume that would help me learn the same concept?
rodolfo.miranda
The naming convention was explained at the edu meeting.
rodolfo.miranda
let me find the link: "English Semantic Naming Conventions for code readability by Sam Smith"
rodolfo.miranda
daniel.hardman
THANK YOU for the link, <@U03P53FCYB1>. This link requires a passcode. Can you tell me where to find that?
rodolfo.miranda
Actually, it was a CESR call, Passcode: +0cL#2DW
<@U03RLLP2CR5> here : are all the CESRide meeting notes & Zoom links. In fact, CESR and Edu are the regular meetings that have their recordings sorted out pretty well and in a timely manner, but we might need recordings the most for KERI/ACDC…. Imo it’s a recurring problem with the KERI and ACDC meetings, the availability of *recordings*… Not only at WebOfTrust (KERI) but also at ToIP (ACDC). I wanted to take the lead in assuring all recordings, both KERI and ACDC, are uploaded and linked in their meeting notes and here in Slack within 15 minutes at the latest after a meeting has taken place. This is what I strive for and _it’s really needed_ to serve newcomers, people who want to educate themselves, and people who have missed a meeting. To make a long story short: I stopped asking for / begging for recordings or access, so I currently can’t do it.
Sorry if this is the wrong channel, but I have a question about KERI messages, and specifically the interpretation of the `kt` (key signing threshold) field. In some instances, the value of this field is a list of strings (representing fractionally weighted thresholds), and in others, it is a string, representing a P-of-Q condition, where P is the integer value in the field and Q is the length of the `k` (list of signing keys) field. In terms of an interpretation of these two different value types, would it be correct to assert the following equivalence? `"kt": "N"` is the same as `"kt": [ "1/N", "1/N", ... ]` where `len(kt) == len(k)` and for all `0 <= i < len(kt)` , `kt[i] == "1/N"`
Yes, a single number threshold that is less than the total number of keys is a simpler expression of an equivalent fractionally weighted threshold.
Why less than? Wouldn't equal work out the same? For example, 3 keys, [1/3, 1/3, 1/3] or are you just saying there is some implicit rule that when the `kt` is omitted it simply requires all? I am foggy on these details as I read the paper in November.
Yeah, sure
I guess my next question would be - is the allowance for the single integer in the evenly distributed case simply to create a minimally-sized encoding? It complicates parsing (nothing unmanageable obviously), but I suppose simplifying that is not the goal, right?
Correct, a more compact representation
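A quick sanity check of that equivalence in Python (illustrative only; this is not keripy's actual Tholder logic):

from fractions import Fraction

def integer_satisfied(kt: str, signer_indices: set) -> bool:
    # "kt": "N" means at least N of the listed keys must sign
    return len(signer_indices) >= int(kt)

def fractional_satisfied(weights: list, signer_indices: set) -> bool:
    # a fractionally weighted threshold is met when the signers' weights sum to >= 1
    return sum(Fraction(weights[i]) for i in signer_indices) >= 1

# 3 keys, "kt": "2" versus ["1/2", "1/2", "1/2"]
for signers in ({0}, {0, 2}, {0, 1, 2}):
    assert integer_satisfied("2", signers) == fractional_satisfied(["1/2", "1/2", "1/2"], signers)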
joseph.l.hunsaker
I pitched Keri (SSI in general) to the R&D team at Telenav (where I work) today. If I can get my company to do a POC, I will finally have time to actually contribute to Keri. Unfortunately I am still in the “mercenary” stage of open source contribution.
Also:
<@U013F46SDRR> <@U02N7K951DW> and <@U02MLCPJ85A> are your people for keriox
robert.mitwicki
Yes, they should produce the same outcome; we are doing our best to keep the implementation up to date with the specification. If you have some questions, feel free to shoot them here.
robert.mitwicki
I created <#C04PAL1NAHJ|keri-rs> channel for any discussion related with rust implementation of any components in keri ecosystem, so we don't need to spam other channels.
was going to recommend you check out <#C04FMH1UJGM|licencing>, actually. all good questions. the calls are a great way to understand what's going on, and the meeting notes.
KERI is a protocol developed by Dr Sam Smith; keripy is the reference implementation of that protocol. There is a white paper and a draft IETF specification along with it. CESR, ACDC, PTEL, SAID are all derivatives that have been extracted from, or developed from, the original white paper, by Sam or <@U024CJMG22J> primarily; it’s open source so obviously there are other contributions. There is no one overarching organization, that’s the great thing, we get contributions from a variety of companies. <@U04HQD29Z7E> has a good point, there was a split over the introduction of a European copyleft license for a variety of reasons. Sam’s original idea was to use IETF to harbor specifications and Apache2 to foster implementations. He documented his reasoning in the WebOfTrust/keri readme, I believe. Welcome! Hopefully we can answer any questions you have.
it depends on your goals, and maybe constraints. the THC `keriox` implementation is EUPL I believe, not Apache2.
there is no Rust implementation under Apache2 as far as I know
But, there are primitive libraries
`cesride` and `parside` which are being actively developed
There is, it is keriox under WoT - it was donated to WebOfTrust by THC before the split, but we have since archived it.
under Apache2
Oh my misunderstanding thanks Kevin
WoT is newer I think
Maybe divergent
I get most information from calls
But this is interesting (didn't realized the DIF one existed)
DIF is irrelevant
:+1:
if you are interested in helping or learning more I really recommend one of the calls
We moved from DIF a year, maybe two ago, the meaningful work is at THC/WoT
There was a disagreement over asserted beliefs of ownership of intellectual property.
i realized upon a re-read that it may appear there is tone that implies not to ask more questions, but that was not my intent. please ask if you have questions, <@U04PXR871PS>
We have started from scratch with cesr/parside, the short version is: CESR is a sub spec of keri and is often deemed the “hard bit” to implement making adoption harder. We decided to start from scratch and model closer to keripy (which fully implements cesr) than what was previously out there. As Sam continues to develop cesr, keripy will likely remain the spearhead of development, libraries downstream (in my opinion) will benefit from being closer to keripy (harboring a common vernacular etc)
Do not discount the THC work though if you’re looking at rust, they have a very comprehensive write up on their license and why. So again it depends on what you’re doing. cesride is designed to be purely the primitives required for cesr, that might not be what you want right off the bat. Parside will consume cesride and represent the parsing module of keripy
We feel that is a good basis for further development
But Jason said it, meetings are a good place :)
<@U013F46SDRR> and others, we just finished a meeting where a group is considering SSI adoption in an IoT use case where key rotation, performance, scale, and management are all important (variety of devices and want to use similar identifiers for people as well as devices, etc). Any help/tips/pointers/pros/cons of using KERI for IoT… and even comparison between KERI and DID based stacks… and of course comparison between KERI and traditional Certificate Authority mechanisms are all welcome. <@U03RLLP2CR5> presentation is helpful . I also recently found this video about DIDs/IoT anchored to a blockchain
robert.mitwicki
We touched on that topic multiple times during various community calls. Long story short, you can't get better than that; the KERI architecture fits perfectly into that space (as many others). If you think about accuracy of information, or even simply device management in a diverse ecosystem, DKMS beats anything you can compare it with. Why? Because it is agnostic to the ecosystem, meaning you can do it simply in a centralized, highly controlled ecosystem, but also implement it in a highly diverse and decentralized/federated one. There are different technical solutions which can be applied depending on the use case based on that approach. The IoT space is so vast that it is hard to simply give short pros/cons. Since you didn't bring up any specific "KERI vs xyz" or "KERI for zyx", I can share that it fits extremely well, and we are starting to see some commercial adoption in that space at scale (really big scale and very sensitive). If you are interested in more details I will be presenting that topic at Identity Week in Amsterdam (June 14th) (I have a few limited VIP tickets to distribute for that event; those who prove they are worthy :wink: happy to share). We also provide consulting services where we can roll out in detail the pros and cons tailored to the use case of interest. P.S. decentralized IAM based on blockchain is not decentralized - I don't think that needs to be explained in this group :wink: and if you want to compare it to DIDs, a very high level description can be found here: DIDs are not secure (with few exceptions); this is why KERI came into existence, to solve that problem.
nuttawut.kongsuwan
I am reading the KERI white paper and came across the fact that KEL is considered a secondary root of trust (Page 60/141, Section 7.35). In my understanding (which seems to be wrong), the controller of a transferrable AID—where the inception key is already rotated—needs both the current signing key and a KEL to cryptographically prove their control authority over the AID. Hence, my previous understanding was that the AID together with its KEL acts collectively as the primary root of trust. I would really appreciate it if someone is willing to clarify this point.
Screen Shot 2566-02-19 at 09.16.13.png
It is important to remember that KERI's security is based on the layering of thresholds. The whole is greater than the sum of its parts. So while it seems that with a transferable identifier (one that can be rotated), the security of the identifier starts with its KEL, it does not. As with all identifier types (transferable and non-transferable), the security starts with entropy. The randomness used to create the private key is the absolute root of trust. Once you create the private key you retain the ability to prove control over your identifier by your ability to use the private key. With non-transferable identifiers, it ends there. With transferable identifiers, the next root of trust (and thus a secondary root of trust) is the KEL, which provides the ability to rotate the keys of the identifier. Additional roots of trust layered on top of entropy and KELs include multi-sig support, delegation and witnesses.
nuttawut.kongsuwan
That clarifies it. Thank you!
Added to the Q-and-A, that’ll appear on the education site soon.
Screenshot 2023-02-19 at 09.46.08.png
In every cryptographic system entropy rules, and if you don't use a good rng (a csprng) you're boned from the start. This is really important for everyone in the space to understand. I knew a guy who was really into measuring hamming weights of outputs for everything he did as a crude way of ensuring sufficient entropy in the things he built.
rodolfo.miranda
I mostly used random generators that came from the main crypto libraries. How can we know if they are good or not? Is there a list of libraries that we should consider using in our projects, or a list of the ones that we should never use?
Generally you want to use the one supplied by the operating system. In Linux this used to be `/dev/urandom` I believe, but there was another one that wasn't cryptographic (the tradeoff must be speed; I've never read about it though I have implemented a prng or two). In Rust you typically use `OsRng` from the `rand` crate. In `cesride` we rely on two variants due to the expectations of `ed25519-dalek` and `k256`, and we simply pass them to the libraries to generate keys. You'd need to investigate how to get random bits in whatever language you are using, and just make sure what you are reading is about a `csprng` and not just an `rng` or `prng`. This looks like it has a lot of good info, though I'm a bit lazy about reading it today:
And, even a good `csprng` should be seeded well - if you can get random data from measuring user input or environmental conditions that can't be easily controlled you will generally be more resistant to attack. Think about the case for example, where a designer is tempted to use a radio receiver and noise to seed the RNG, but an attacker floods the space with a frequency that produces a deterministic result in the ADC or something.
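In Python the equivalent advice is to stay on the OS CSPRNG via the standard library (a minimal sketch; never use the `random` module for keys):

import os
import secrets

seed_a = secrets.token_bytes(32)  # 32 bytes of cryptographically strong randomness
seed_b = os.urandom(32)           # the lower-level call that secrets ultimately relies on
print(seed_a.hex())
print(seed_b.hex())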
rodolfo.miranda
interesting. Thanks! Often we take care of a lot of things, but not the bare basic ones
I'm questioning now whether `/dev/urandom` is actually an RNG or just a source of entropy, and used to seed an RNG. I won't be looking it up at the moment but don't want to mislead so read further if it is important.
Good discussion, I conclude that urandom is at least a PRNG. I’ve created this stuff: and entries in the Q-and-A.
Screenshot 2023-02-19 at 20.54.13.png
Here's a way to view entropy on the command line (should work in Linux and macOS):
cat /dev/urandom | od -tx1 -Ax
if you want a specific number of bytes, something like:
dd if=/dev/urandom bs=1 count=32 > entropy
cat entropy | od -tx1 -Ax
I'd also recommend trying out WSL2 in Windows if you are unfamiliar, for a more consistent dev experience across platforms - you should have `urandom` in there too.
In KERIpy we use libsodium through the python bindings. I've never actually looked at what their documentation says about randomness, but here it is: > The library provides a set of functions to generate unpredictable data, suitable for creating secret keys. > • On Windows systems, the `RtlGenRandom()` function is used. > • On OpenBSD and Bitrig, the `arc4random()` function is used. > • On recent FreeBSD and Linux kernels, the `getrandom` system call is used. > • On other Unices, the `/dev/urandom` device is used. Thought that was interesting based on this conversation.
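For reference, the same thing through the libsodium Python bindings (a minimal sketch, assuming pysodium is installed, as it is for KERIpy):

import pysodium

raw = pysodium.randombytes(32)  # unpredictable bytes suitable for salts, seeds, and secret keys
print(raw.hex())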
I extended the answer with Philip’s input in the related Q&A entry
Information on *DNS & Identity* and *Jacques Latour* (CTO of CIRA - Canada Internet Registrar/Registration Authority), who is advocating for leveraging current DNS tech/tech direction vs. attempting to replace it. Contact info:
• @Jacques Latour (on ToIP Slack)
Some material provided by Jacques, his team and others:
• Michael Palage () –
• Digital Identity - Emerging Identifier Technologies - JL_CIRA
  ◦ Presentation at ICANN 75 (21 Sep 2022)
  ◦ Ask Jacques for a link
rodolfo.miranda
Yesterday I presented the did:keri method to the Aries WG as an alternative to did:peer. You can find the recording at the end of this (that recording, starting at min 21:20, can also be useful for those who weren’t at the KERI meeting when I presented the topic weeks ago, since it was not recorded). The Aries WG now has the task to compare different candidates: did:peer:1, did:peer:2, did:keri, did:legacypeer and make a recommendation. Quote of what they are looking to compare:
- Verifying and document that the transformation can be made in each case — e.g. from unqualified DID and related DIDDoc to the new format — and what extra information and libraries would have to be defined/used for each.
- Create a spreadsheet of the properties that we get from each so that we have a comparison — size of DID, key rotation, ease of transformation, future path, multi-key support etc. Some will be subjective, but I think we have them in one place, it might be easier to see the differences.
I have a polite request I'd like to make. Our little KERI World community is growing, over the past couple of years we've incorporated a lot of new projects, CESR, ACDC, the entire vLEI ecosystem. We've seen production deployments and proof of concepts, we've done (more than I can count) presentations, we've had people attend conferences and represent this community. We're writing specifications and pushing for recognized standards for the work this community does. However, KERI World (this Slack) and the Web Of Trust repositories are not a traditional standards organization, there's no paid infrastructure. There are no full time people who support it, it is literally a community of volunteers working together under a banner of like-mindedness. This Slack is not Stack Overflow for KERI, we can't scale to that, there just is not enough of us. My polite request is this, introduce yourself. Introduce the work you're doing (so we can be excited about it with you!), if you haven't been to a KERI/ACDC/CESR call, do so, it's great to put faces to names and it is how our community grows closer and stronger together.
robert.mitwicki
For individuals with an interest in the domains of Internet of Things (IoT) and Supply Chain, I wish to draw attention to my upcoming seminar session on this subject at Identity Week Europe 2023 in Amsterdam. The session will take place on the 13th of June at 15:20, during which I will illustrate how advanced technologies such as KERI, ACDC, Microledger and OCA can be utilized to address issues in diverse, cross-jurisdictional domains, supported by concrete examples in cold chain and IoT sectors, underpinned by DDE stack. P.S. as mentioned in previous post still have few VIP tickets to distribute, PM me if you are interested.
I’m looking forward to this, thank you for letting us know <@U013F46SDRR>!
In this article by Christopher Allen, GLEIF is mentioned as an example of (undesirable) crypto agility. I can't see why, yet. What do you think?
Allen mentions GLEIF also in the rwot community channel. I've asked for clarification.
Clarification by Christopher that we got: “GLEIF appears to be adopting many of the design patterns of JOSE, which is an example of the perils of cryptographic agility.” Are we on the same page, or are we mixing up LEI and vLEI?
I believe he's simply referring to the fact that various cryptographic primitives can be used, such as various hash methods or signatures. The danger of cryptographic agility is simply that libraries can be poorly implemented and not follow proper verification logic. Which was famously the case with JOSE/JWTs ( e.g. headers are ). Another gotcha is that people will make poor choices when configuring keys, selecting algorithms, etc. That's what `nacl` is helping to avoid. Or `paseto` tokens, etc. Lastly, it adds complexity to the processing logic and therefore potential bugs, including side channels. See TLS cipher suites (e.g. downgrade attacks). From my standpoint (and I'm not a cryptographer) KERI is using cryptographic agility justifiably: • new or stronger key algos can be used in the future for transferable prefixes. • the set of primitives is relatively small and is easy to reason about.
I think so too. I still don’t understand why he explicitly puts GLEIF in the spotlight for this; there are thousands of projects that meet those criteria (even better). Why GLEIF?
I guess it's because GLEIF is a high-profile project which is also a first mover to adopt a brand new cryptography-based tech (cryptographers are usually highly suspicious of any new efforts in that space). And he's probably biased towards , which arguably targets the same space and challenges as KERI/ACDC. :)
I think Vasily is correct here. He is confusing KERI's support of cryptographic agility, which means support for multiple algorithms, with the problems JOSE had because they allowed flexibility in the selection of parameters for specific algorithms. The latter allowed for downgrading security by making poor choices in the parameters.
<@U024CJMG22J> so my (personal) reply to Chris about his remark “GLEIF appears to be adopting many of the design patterns of JOSE, which is an example of the perils of cryptographic agility.” could be: According to the KERI/CESR/ACDC team behind the GLEIF vLEI implementation, you might be confusing this with KERI’s support of cryptographic agility, which means support for multiple algorithms. The current KERI suite uses relatively dumb but seasoned cryptography. The problems JOSE had arose because they allowed flexibility in the selection of parameters for specific algorithms. The latter allowed for downgrading security by making poor choices in the parameters. This is not the case at GLEIF nor in the KERI stack.
Yes Henk, that is spot on!
It is ironic that Christopher specifically mentions GLEIF as the vLEI EGF ties down the cryptographic algorithms that are allowed to ed25519 and Blake3, meaning the vLEI is even less agile than KERI itself.
Very uneducated article if you ask me
joseph.l.hunsaker
This may be of general interest: PolygonID “The future is self-sovereign”
Polygon ID Developer Use Cases -> “DAOs: Members often wish to remain anonymous online”. SKIP
If you write something like this, it reveals a lack of understanding.
“MATIC is the currency of Polygon that enables users to interact with tens of thousands of dApps involved in our ecosystem. It is also used to secure the network by staking.” JASC : Just Another Shitcoin.
What does GAR stand for in RootGAR, ExtGAR, and IntGAR?
I suspect “AR” is similar for QAR and LAR as well.
[GLEIF | Qualified vLEI Issuer | Legal Entity] Authorized Representative
And RoOT needs this capitalization as it is a recursive acronym that stands for RoOT Of Official Trust
I am looking forward to Glossary items :slightly_smiling_face:
nuttawut.kongsuwan
They are defined here.
Thank you.
I think that’s why Dr Smith created KERI. It represents a best in class DKMI.
<@U04PXR871PS> a few months ago I came to the same conclusion
petteri.stenius
I believe there's a TSWG ACDC call today. Since North America has switched to daylight saving, could someone check at what time the call is scheduled today? I'd like to join as observer
Hi Petteri, the call started 45 minutes ago and is going on right now at:
petteri.stenius
Okay, I'll try to join next time with better luck :slightly_smiling_face: Could you send me an invite to the ACDC slack channel?
<@U03U37DM125> keep an eye on this
Post your reply (vote) of where (which WoT repo) the KERI ecosystem *scrapbook* should belong. This scrapbook will capture the latest activities, vids, releases, guides, etc. in chronological order. Ideally it will be a markdown file and be available as a page on the web. Practically speaking it should highlight how active/vibrant the KERI/ACDC/GLEIF ecosystem is and helps people find the latest activity first. For instance at the top should be the link to the and a little further down would be Daniel’s recent . It would be somewhat similar to but a stream of latest activity so you get a sense of the age of the resource
or is the ecosystem ready for a social media account? :wink:
andreialexandru98
Came across this set of that talk about authentic data as well, and it looks like a more specific implementation of keri using specific crypto primitives. How are Authentic Data Provenance Logs different from key event logs?
andreialexandru98
the work also looks stale but the ideas look very very similar to keri
There is quite a history between the work in that repo and KERI. Best to leave that in the history books and just say that yes, that work appears to have been abandoned and KERI is under active development and improvement.
joseph.l.hunsaker
The Nostr protocol is getting a lot of steam. Jack Dorsey (former twitter CEO) has focused on web3. Nostr is the new Decentralized Twitter. Many of the bitcoiners are using it. Creating a Nostr “account” seems a lot like SSI
joseph.l.hunsaker
joseph.l.hunsaker
Maybe we should see if KERI is compatible, and possibly make a PR
joseph.l.hunsaker
I am using Amethyst
joseph.l.hunsaker
But there are many UI clients built on the protocol
I'm on there (though I don't really participate yet). npub10tp6kq6cwry0ag0qz3c0kfwmlnygm8kzd2hhk2hhm2mt6u8uu37sv8z97s
joseph.l.hunsaker
Yeah, I have followed a few people. Its very interesting. Similar UX to twitter, but still a little raw.
It seems like `WebOfTrust/keri` makes the most sense to me as that will be the focal point of KERI moving forward. My second vote would be `WebOfTrust/awesome-keri`
The issue with the way it is defined now, as I see it, is that they use raw public keys rather than KERI identifiers which was probably done for simplicity. There is nothing saying KERI couldn't back this kind of system, and in fact if everyone was using non-transferable keys many things probably simplify, but the user wouldn't be able to retain control of their account in the case of compromise. If instead, a message was tagged with an identifier, the solution becomes more complex but users can recover and protect their account.
I only researched for a short time, so someone please correct me if my understanding is wrong.
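To make the difference concrete, here is a rough sketch (plain Python, all names illustrative; this is not Nostr's or keripy's API) of why a raw-key identity cannot survive a key compromise while an AID can:

from dataclasses import dataclass

@dataclass
class RawKeyIdentity:
    pubkey: str  # the identifier IS the key, Nostr-style

    def recover_from_compromise(self):
        # Nothing to rotate to: followers have to learn a brand new pubkey.
        raise RuntimeError("no recovery path; the identity must be abandoned")

@dataclass
class AidIdentity:
    prefix: str            # stable identifier bound to the inception event
    current_key: str
    next_key_digest: str   # pre-rotation commitment made at inception

    def rotate(self, new_key: str, new_next_digest: str):
        # In real KERI this is a signed 'rot' event appended to the KEL and
        # checked against the prior commitment; here we only show the effect.
        self.current_key = new_key
        self.next_key_digest = new_next_digest
        # self.prefix is unchanged, so existing relationships survive.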
joseph.l.hunsaker
Thanks for taking a look, Jason. Again from a top level, I'm interested in how a new user would be introduced to KERI identifiers, OOBI or otherwise. I assume they wouldn't care or wouldn't need to know it is KERI. I assume it would be some contact-card-on-steroids analogy.
joseph.l.hunsaker
Or maybe there is something similar to the oauth workflow that is just more KERI centric
joseph.l.hunsaker
I am just a very top-down thinker and I don't feel like I have been given a user-centered top view yet.
Imagine these two scenarios to start: 1. You want to purchase a bottle of wine and need to prove your age, but your driver's license contains all kinds of information you'd rather not share with the staff. Instead of giving them an ID, you scan a QR code on their device, begin an OOBI with the seller, and start an exchange process of graduated and selective disclosure that ensures that if you have a credential or ACDC with an appropriate chain of authority (back to something like a governmental body) proving you are 19, proof of this and your authentic, unexpired ID photo is presented for verification by the employee via the exchange. Their system allows them to approve the sale, and you haven't disclosed your address etc. 2. You are arriving at the hospital and dread filling out forms. No need! With ACDCs issued via KERI and their associated records, proven authentic and signed by doctors and institutions, you can enter the hospital without filling out any forms. These are the two examples that unlocked the most new use cases in my mind for decentralized ID in general. There are so many more.
I'm not sure if that's the kind of top down thing that will help you, maybe if you give me more guidance I can explain more of what I have thought about.
joseph.l.hunsaker
Those are great examples. I have thought of many more scenarios as well. It’s roughly starting to take shape. The way you described the “Alcoholic Beverage Age Proof” made it sound a little like the OpenID/OAuth workflow — where interacting with a third party consists of: #1) Choose an Identity Provider you want to use for interaction. #2) Sign-in to chosen Identity Provider. #3) “Review and Accept”, or “Review and Select” what data that will be exposed to the third party. Whereas, the KERI SSI flow would be something like: #1) Choose the KERI SSI option as an Identity Provider (no need to sign in, because your keys will be used). Then, if I have never completed the “Alcoholic Beverage Age Proof” with this vendor, I must: #2) Choose an “Age Disclosure” from a list of acceptable options for providing my age. #3) Choose a “Photo Disclosure” from a list of acceptable options for providing my photo. Where, for each disclosure, the third party has pre-selected the “list of acceptable” providers for their specific trust/reputation tolerance level. Also, the first time you do this, your choices are saved (as an ACDC?), so the next time these disclosures are presented, you will not need to select them again. And the process will end with #1, making for a better user experience. Additionally, any other third party that requires an “Age Disclosure” or “Photo Disclosure” and accepts the providers chosen will automatically resolve. Again, resulting in a better user experience. At least, that is what I would imagine.
joseph.l.hunsaker
<@U0448S72CQN> here is mine
Screenshot_20230318-182823.png
joseph.l.hunsaker
I followed you!
Back at ya!
One little thing to mention: Jack Dorsey hired Daniel Buchner. Buchner is the father of ION and Sidetree and an explicit critic of KERI. Not always based on fundamental knowledge, but on bias, imho. There’ve been heated discussions between Sam and Daniel Buchner in the past, where Daniel proved not to fully understand KERI yet still dismissed it. So I don’t think KERI gets a warm reception at Nostr while Buchner is the CTO. However, I experienced Daniel as a friendly, helpful guy back when I documented Q&As for ION and Sidetree.
I have an anecdote: If I recall correctly, in an EFWG meeting when I first entered the space, I asked Daniel directly whether the DWN solution had provisions in it for the strong binding between identifier and keys that KERI allows by virtue of the identifier being a commitment to/derivation of inception. He replied 'KERI is a witness network! It's not the same thing. Of course we have cryptographically bound our keys to our identifiers.' I didn't understand at the time what DID methods were, etc, but I am quite sure he totally missed what I was asking, as he didn't provide any analysis, reasoning, or explanation that educated me. To the best of my knowledge the criticism is unfounded but I am not the right person to comment.
Also the pre-committed rotation of course, binds every subsequent set of keys to the identifier/inception. That was the feature I had trouble finding elsewhere. ACDC was just icing on the cake when I learned about it.
<@U04PXR871PS> There were some long threads in Issues and in Slack channel(s) at DIF KERI back then.
I found an example for you
nuttawut.kongsuwan
I have published my study note on the KERI whitepaper on Medium. I tried to summarize the important concepts in the whitepaper, prioritizing readability over comprehensiveness. I am not sure how useful this is for the community. But if anyone is willing to read and give feedback (or correction), I would really appreciate it. :slightly_smiling_face:
Very nice! It'd be great to extend this a bit and mark parts that are not implemented today and are still in the planning, e.g. gossip between the witnesses, the watcher role, etc. The reason it's useful is that the white paper draws a big picture, a theory, whereas the implementation covers many but not all aspects. So for anyone who wishes to learn about KERI and then build some use cases around it, the white paper is not enough. BTW, there is a small typo at the beginning of the blog: "*Part 2* dives into other peripheral topics in the KERI ecosystem such as Authentic Chained Data Containers _*(ACDA)"*_ - should be ACDC.
nuttawut.kongsuwan
<@U04RNMG8Z51> thank you for spotting the typo. Highlighting the features that have and have not been implemented sounds like a good idea. Unfortunately, I am not well aware of the full extent of the current implementations. A community’s feature roadmap does sound like something very useful.
joseph.l.hunsaker
Thank you! This looks amazing.
joseph.l.hunsaker
I recently made a comment for OIDC
joseph.l.hunsaker
<@U04RNMG8Z51> :point_up: is this a “high level” (user-centric view) of what you imagine with OIDC?
In a way. OpenID for VC Issuance/Presentations is an extension to OpenID that would allow wallets to receive and present credentials. So the scenario you are describing should be well supported. The great thing about the spec is that any type of credential can be exchanged; the spec is "VC" format agnostic, if you will. You can check it out .
Now, the EU has recently published the Architecture Reference Framework for EU Digital Identity - an EU-wide initiative for credential-carrying wallets. OpenID for Verifiable Credentials is at the center of the document, which means that ACDCs can eventually be made part of it.
rodolfo.miranda
Does it say that the VC needs to be W3C VC?
rodolfo.miranda
based on , the VC can be:
Can be of any format used in the Issuer-Holder-Verifier Model, including, but not limited to those defined in [VC_DATA] and [ISO.18013-5]
where VC_DATA is the W3C VC data model and [ISO.18013-5] is mDL.
rodolfo.miranda
go for it then!!!
rodolfo.miranda
I think it's a good match for GLEIF, eIDAS, and DIDAS
rodolfo.miranda
<@U04RNMG8Z51>, you already mentioned that it's VC format agnostic. I missed your previous comment
andreialexandru98
Are there any post-quantum signature schemes implemented in CESR? I came across this and realized that for any federal use case they will not accept vendors that do not use post-quantum encryption schemes. I understand that they are worried about national security, but I figure that big companies will follow suit soon
The code tables allow for expansion to include new signature schemes, and I believe <@U024CJMG22J> did some work on a PoC to include Falcon, one of the NIST shortlist entries
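To illustrate what that expansion means in principle, here is a hypothetical sketch; the code strings and the Falcon-512 size below are placeholders for illustration, not actual CESR table assignments (the "D"/"E" entries reflect my understanding of the current table):

CODE_TABLE = {
    # derivation code: (primitive name, raw size in bytes)
    "D": ("Ed25519 verification key", 32),
    "E": ("Blake3-256 digest", 32),
}

def register_primitive(code: str, name: str, raw_size: int) -> None:
    """Sketch of allocating a new derivation code for an added scheme."""
    if code in CODE_TABLE:
        raise ValueError(f"code {code!r} is already assigned")
    CODE_TABLE[code] = (name, raw_size)

# Hypothetical registration of a NIST PQ candidate; the real spec would pick
# the code and define the exact text-domain padding for this raw size.
register_primitive("1XXX", "Falcon-512 public key (hypothetical)", 897)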
Thanks for sharing Henk! I had no idea this existed. Here's the money post imo:
Also for those following, another really cool feature besides the strong identifier/key binding in KERI is delegation.
joseph.l.hunsaker
Correct me if I am wrong, but in Sam’s ACDC presentation this week, I noticed that the “append to extend” model allowing back references forms a Directed Acyclic Graph (DAG)
rodolfo.miranda
You are right!
Sam was explaining that the immutability of the DAG is important for append-to-extend to scale: verification of previous portions can be cached, making future verifications incremental
joseph.l.hunsaker
Wonderful
Sam explains that in detail in the presentation he did on Mar 21 in the ToIP DMRWG call. I'll have the recording, meeting notes and slides in the next 24 hours.
It's actually noted that ACDCs chain into a DAG in the intro paragraph of the IETF ACDC draft:
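For a concrete picture of the DAG, here is a minimal sketch (placeholder SAIDs, illustrative edge labels) of an ACDC edge section pointing back at earlier ACDCs by their SAIDs:

journey_charter_edges = {
    "d": "E..._edge_block_said",           # SAID of this edge block itself
    "journey": {
        "n": "E..._said_of_journey_acdc",  # node: SAID of the referenced ACDC
        "s": "E..._said_of_its_schema",    # schema that ACDC must satisfy
    },
    "mark": {
        "n": "E..._said_of_mark_acdc",
        "s": "E..._said_of_its_schema",
    },
}
# References only point backward to already-issued, immutable ACDCs, so no
# cycles can form -- hence a directed acyclic graph.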
Hey all, I made something that describes KERI/ACDC at a high level - hope it's useful! Thanks to <@U024CJMG22J> for the feedback!
nuttawut.kongsuwan
I love the diagram! If you wouldn’t mind, may I suggest that OOBI also be added somewhere? Perhaps below CESR? Also, is it just me, or are there two boxes for ACDC? My SVG rendering seems a bit odd.
Screenshot 2566-03-28 at 08.41.38.png
I would need to understand OOBI a bit more to write a good description. I understand how it works but haven't implemented it. Yes, <@U04H17ZEX9R>, your rendering is messed up somehow. Mine is on a black background, and the text doesn't overlap with the lines. I can make a PNG as well; I'll post one shortly. The reason there are two sections on ACDC is that one is about the spec and the other about the entity itself. I wonder if there is a way to make this clearer.
Here's a PNG version
KERI and ACDC.png
nuttawut.kongsuwan
Thanks! How about “ACDC Instance”? But I guess it is a bit long.
This is excellent! Thanks <@U04HQD29Z7E>. Mind if I include this in my impending KERI Tutorial post?
Go for it Kent. Do you think I should add something about OOBI? It seems less core to me, but if people want it I don't mind doing it
I am reading about it, even though it's adjacent I can see why it was requested.
Yes, something on the OOBI is fundamental, in my opinion. I am just finishing up my lengthy blog post today and it has a whole section on OOBIs.
Are you addressing agent-to-agent discovery with a different mechanism? Or do you not need it for your use case?
No we will use it or something very similar, but at launch we likely can get by without it. So to me, we need KERI and ACDC to set us up for success in the future since they define how the data needs to look
But, as it is very useful
It does seem like the discovery mechanism can be swapped out for something else.
KERI and ACDC really are the core.
As long as you can resolve AID prefixes and credential schema SAIDs then you’re good to go.
And verify their KELs, of course.
Yeah exactly
We have some interesting ideas of how to roll out our product, come to IIW!
But back to the OOBI discussion
Do you think it's worth adding almost like an aside down in the space?
It is pretty useful
Yeah, I agree, just under the “Chained Data Structures” box?
Correct
That would make sense. The only other place it might make sense is just above the top ACDC block.
I'll see where it fits best
Once I have some text
Do you think it's sufficient to say something like:
KERI OOBI (Out-of-Band Introduction) connects trusted KERI identifiers with discovered URLs to enable authenticated discovery over IP infrastructure.
Or am I trying too hard and adding ambiguity
Maybe I could finish it off with
, and it has the potential to replace DNS securely.
Maybe that's incorrect actually, it kind of depends on _something_ for discovery, it isn't really replacing it is it?
I'll leave this here for comments and make edits in a bit.
Sam's most recent white paper "Privacy Given Strongest Authenticity and Confidentiality" has a very nice section on what and why we need OOB set ups for secure, authentic communication: Perhaps there is something in there that helps...
What is the best way for me to learn about Keep? I’m reading through all the code and can get the general picture, though understanding how it is used would help accelerate my learning efforts. I’d like to rewrite my own UI on top of KERI and ACDC, specifically the Mark II Agent as we build it, to make sure I really understand this technology and how to make it usable. And, of course, I want to add both a desktop UI and a mobile app to my Abydos Tutorial for teaching new devs how to get up and running in the space quicker than I did.
I’ve already found this useful. Thanks!
Where do I put PRs for Keep? I just opened one for updating WebOfTrust/keep with the latest changes from GLEIF-IT/keep (development), excluding the branding and witness pools.
<@U024KC347B4>
WebOfTrust/Keep is intended to be the community reference implementation for a user interface. It was donated by GLEIF but not maintained after the end of last year.
A potentially wonderful contribution to the community would be integrating a TypeScript Signify client library into it and modifying the UI to work as a signing at the edge client. I certainly don't have time to do anything with that.
I am down to add that. I’m putting in the time anyway to walk through all the user flows with Keep and understand all the code. A great test of my knowledge and way to ensure I actually learn would be to make this sort of contribution.
Is Signify-TS currently at a point where I could begin this work?
It has all the cryptographic primitives in it that it could be enhanced to act as a full Signify client. The actual client / server interactions have not been added to it yet.
A word of warning: we stopped working on the TS version because we are planning on integrating it with CESRide, since it currently requires WASM for the cryptographic libraries anyway. And once we thought of that, we started wondering if we would just want to use WASM in Signifide instead of writing any client code in TS. Something we plan on talking about next week.
rodolfo.miranda
That's a talk I'll love to join too.
rodolfo.miranda
when you said crypto libraries, are you referring to libsodium? I'm checking
Both libsodium and blake3 required WASM.
rodolfo.miranda
is there a recommended blake3 library to bind?
This is what signify-ts is currently using:
I inherited it from the very early days JavaScript KERI lib keri-js. But when I realized it was WASM I looked for alternatives and found none
rodolfo.miranda
thanks. I was asking because Sam has a strong case for using libsodium and not other libraries.
Yes, we use libsodium for random key generation and ed25519.
In keripy and signify-ts
But libsodium does not have blake3
I'd echo Phil's word of warning with regard to signify-ts; for me, investing in Rust and the right WASM bindings is a better path
Says the :crab:
haha, barely
rodolfo.miranda
agree, I'm just looking at bindings that can be supported in react native
cool
rodolfo.miranda
the libsodium one says: • iOS • Android (coming in the next months) • Web :thinking_face:
I want to improve and simplify the process of agents discovering each other so connecting across KERI agents can be intuitive from a UX perspective for non-techies. I am trying to simplify the user experience of connecting KERI Agents to one another and in so doing am looking to automate as much as possible. I want to understand conceptually why I can’t replace the manual OOBI and challenge phrase sharing with a step that is performed by some matchmaking or discovery server. Essentially I’d like to have a mobile app have a listing of agents pre-loaded so that the user can just tap an agent to connect with, initiate an OOBI exchange, and then a challenge phrase workflow. What security guarantees do I lose if I have a man-in-the-middle matchmaking or discovery service facilitate this? Do I open myself up to man-in-the-middle attacks, or is everything still reasonably secure since OOBI resolution results in key state verification, which cannot be faked?
You are simply providing a discovery mechanism so as long as each user resolves the OOBI (and verifies key state for themselves) and then does a challenge response you have not sacrificed anything.
rodolfo.miranda
OOBI is a way to map the AID to the URL where you can query the end-verifiable KEL. You can have those pairs in a DB or any place without decreasing security.
But you are changing the dynamic of OOBI exchange. You are asking people to publish an OOBI, which eliminates the ability to have 1-1 connections. Not sure that helps anyone but public identifiers for things like brands.
rodolfo.miranda
And that you still need to find a way to share the AID and the challenge words
Thanks Phil, this makes sense.
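A rough sketch of the flow being described, with all function names hypothetical (not keripy or Signify APIs); the directory only hands out hints, and all trust comes from local KEL verification plus the challenge-response:

def connect_via_directory(directory, my_agent, contact_name):
    oobi_url = directory.lookup(contact_name)       # untrusted hint only
    aid = my_agent.resolve_oobi(oobi_url)            # fetch and verify the KEL locally
    words = my_agent.generate_challenge_words()      # fresh nonce words
    my_agent.send_challenge(aid, words)
    response = my_agent.await_response(aid)
    if not my_agent.verify_challenge(aid, words, response):
        raise RuntimeError("challenge failed: peer does not control this AID")
    return aid  # safe to associate with the contact from here on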
nuttawut.kongsuwan
Thanks :)
nuttawut.kongsuwan
I was recommended to read this article at IIW. It seems there is a disagreement on whether to include specific did methods in the spec. One of the motivations that makes me adopt KERI is to avoid this mess entirely and just use AIDs. “Summary of Technical Objection Having the DID WG standardize specific DID methods creates moral hazards that will inevitably reduce the decentralization of DIDs themselves. Whichever methods the group chooses will attain unprecedented acceleration and impact. It will take an increased attention from the WG focused on those methods, taking away from attention that could go to advance all DID methods. That attention will also increase the WGs familiarity with how chosen methods solve certain problems and those approaches will necessarily already have the blessing of the WG, making them preferred favorites over approaches used by other methods. In short, giving the DID WG the permission to focus its attention on selected favorites would directly undermine its ability to advance the work with equal respect for all methods.”
Any opposed to creating a #keri-dev channel? I have some technical questions and findings that would probably benefit others to see answered and presented.
(Not specific to Rust)
I guess I'm asking because I wonder if we should just archive <#C04PAL1NAHJ|keri-rs> and rename <#C025C56K2SC|keripy>, as another approach
rodolfo.miranda
+1
<@U03P53FCYB1> which approach are you +1 to?
Since we don't have history right now it isn't a huge deal to rework things since keri-rs will eventually be inaccessible if we stop posting there (it isn't getting a ton of activity, as most of my questions are general)
But I don't want to make that kind of decision without some kind of consensus process
Or maybe the intent of these channels is fine and I'm overthinking. Anyhow, please leave your thoughts here, everyone.
rodolfo.miranda
#keri-dev I think is the best approach for the moment
In the meantime, here's something very useful I distilled with a bit of work from <@U03EUG009MY>’s blog post:
[
    {
        "v": "KERI10JSON0001b7_",
        "t": "icp",
        "d": "EIaJ5gpHSL9nl1XIWDkfMth1uxbD-AfLkqdiZL6S7HkZ",
        "i": "EIaJ5gpHSL9nl1XIWDkfMth1uxbD-AfLkqdiZL6S7HkZ",
        "s": "0",
        "kt": "1",
        "k": [
            "DJv-oxWcj6crSUOvXdIppBhMuIBqoo6rr43xk6A8DWMp"
        ],
        "nt": "1",
        "n": [
            "EGEe3wLra7vhK27WvvnQhookAs2Dc1PPzSGpRtBC4f4N"
        ],
        "bt": "3",
        "b": [
            "BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha",
            "BLskRTInXnMxWaGqcpSyMgo0nYbalW99cGZESrz3zapM",
            "BIKKuvBwpmDVA4Ds-EpL5bt9OqPzWPja2LigFYZN2YfX"
        ],
        "c": [],
        "a": []
    },
    {
        "v": "KERI10JSON000113_",
        "t": "vcp",
        "d": "EAuvxnObLjS8E2Yn9-7twA8_ZqEsMC3vispqy26NVdaW",
        "i": "EAuvxnObLjS8E2Yn9-7twA8_ZqEsMC3vispqy26NVdaW",
        "ii": "EIaJ5gpHSL9nl1XIWDkfMth1uxbD-AfLkqdiZL6S7HkZ",
        "s": "0",
        "c": [
            "NB"
        ],
        "bt": "0",
        "b": [],
        "n": "AGHblRF4Vdcg_Q5Y4j7LTdFPR04qH4ARR2XrK-Xb6HRz"
    },
    {
        "v": "KERI10JSON00013a_",
        "t": "ixn",
        "d": "ED7MXKROzbMwLMnDojpcOnCLWYBMiDo0LY5DbCzy5TB-",
        "i": "EIaJ5gpHSL9nl1XIWDkfMth1uxbD-AfLkqdiZL6S7HkZ",
        "s": "1",
        "p": "EIaJ5gpHSL9nl1XIWDkfMth1uxbD-AfLkqdiZL6S7HkZ",
        "a": [
            {
                "i": "EAuvxnObLjS8E2Yn9-7twA8_ZqEsMC3vispqy26NVdaW",
                "s": "0",
                "d": "EAuvxnObLjS8E2Yn9-7twA8_ZqEsMC3vispqy26NVdaW"
            }
        ]
    },
    {
        "v": "ACDC10JSON0001d9_",
        "d": "EKriPO1JGk369l5zJGzfFn-yS0dazYu0ktCg8h5pxKZ3",
        "i": "EIaJ5gpHSL9nl1XIWDkfMth1uxbD-AfLkqdiZL6S7HkZ",
        "ri": "EAuvxnObLjS8E2Yn9-7twA8_ZqEsMC3vispqy26NVdaW",
        "s": "EIxAox3KEhiQ_yCwXWeriQ3ruPWbgK94NDDkHAZCuP9l",
        "a": {
            "d": "EBLyBuP_LUGSP3cDUKxQiCe86Sh5J4R6XXqz2hjVEc5H",
            "i": "EJS0-vv_OPAQCdJLmkd5dT0EW-mOfhn_Cje4yzRjTv8q",
            "dt": "2023-04-23T09:10:37.698209+00:00",
            "destination": "Osireion",
            "treasureSplit": "50/50",
            "partyThreshold": 2,
            "journeyEndorser": "Ramiel"
        }
    },
    {
        "v": "KERI10JSON0000ed_",
        "t": "iss",
        "d": "EHD81HVJGDhFWt82QCS_7II44Lqf9iO43ayt4aZ7wqWr",
        "i": "EKriPO1JGk369l5zJGzfFn-yS0dazYu0ktCg8h5pxKZ3",
        "s": "0",
        "ri": "EAuvxnObLjS8E2Yn9-7twA8_ZqEsMC3vispqy26NVdaW",
        "dt": "2023-04-23T09:10:37.698209+00:00"
    },
    {
        "v": "KERI10JSON00013a_",
        "t": "ixn",
        "d": "EKmNtSAju7d3KhdxzIdyZycZNgAWDKfW5YDRUy9eDHis",
        "i": "EIaJ5gpHSL9nl1XIWDkfMth1uxbD-AfLkqdiZL6S7HkZ",
        "s": "2",
        "p": "ED7MXKROzbMwLMnDojpcOnCLWYBMiDo0LY5DbCzy5TB-",
        "a": [
            {
                "i": "EKriPO1JGk369l5zJGzfFn-yS0dazYu0ktCg8h5pxKZ3",
                "s": "0",
                "d": "EHD81HVJGDhFWt82QCS_7II44Lqf9iO43ayt4aZ7wqWr"
            }
        ]
    }
]
It's a full set of messages required to issue an ACDC (without sigs, but that's the easy part)
Okay <@U03P53FCYB1> I think that's easiest too
If another person gives it a +1 I'll just do it
The tricky part about those messages was getting the issuance VC TEL event, and then figuring out the order. The trick to the ordering is that you highlight the `d` field in an editor and search for it, and ensure that the first occurrence is where it is the SAID of the top-level entity. So essentially, go through the top-level messages and, for each `d` attribute near the beginning, do a search and make sure that one is #1.
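If it helps, that ordering rule can be automated with a plain topological sort over SAID references; this is my own helper sketch, not part of keripy:

import json

def order_messages(messages):
    """Order messages so each one precedes anything that references its SAID."""
    by_said = {m["d"]: m for m in messages}
    ordered, placed = [], set()

    def deps(msg):
        # SAIDs of other messages that appear anywhere in this one
        text = json.dumps(msg)
        return {s for s in by_said if s != msg["d"] and s in text}

    def place(msg):
        if msg["d"] in placed:
            return
        placed.add(msg["d"])   # mark first to avoid runaway recursion on pathological cycles
        for dep in deps(msg):
            place(by_said[dep])
        ordered.append(msg)

    for m in messages:
        place(m)
    return ordered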
I think it would be valuable to be able to say `kli vc complete -n name --prefix EKriPO1JGk369l5zJGzfFn-yS0dazYu0ktCg8h5pxKZ3` and get this stream of messages, with sigs. It's a portable, verifiable block of data supporting the acdc in question.
maybe not complete, but something like it.
complete has a verb meaning
Oh and in that set of messages, the first two are one time things.
For completeness, here's the schema used:
{
    "$id": "EIxAox3KEhiQ_yCwXWeriQ3ruPWbgK94NDDkHAZCuP9l",
    "$schema": "",
    "title": "Treasure Hunting Journey",
    "description": "All of the facts regarding a treasure hunting journey including destination and treasure splits",
    "type": "object",
    "credentialType": "TreasureHuntingJourney",
    "version": "1.0.0",
    "properties": {
        "v": {
            "description": "Credential Version",
            "type": "string"
        },
        "d": {
            "description": "Credential SAID",
            "type": "string"
        },
        "u": {
            "description": "One time use nonce - optional",
            "type": "string"
        },
        "i": {
            "description": "Issuer AID",
            "type": "string"
        },
        "ri": {
            "description": "Credential Registry Identifier",
            "type": "string"
        },
        "s": {
            "description": "Schema SAID",
            "type": "string"
        },
        "a": {
            "oneOf": [
                {
                    "description": "Attributes block SAID",
                    "type": "string"
                },
                {
                    "$id": "EPfNU6jej4GGyprpakq6KCO9os9vp9jLIRcH1xJqVezj",
                    "description": "Attributes block",
                    "type": "object",
                    "properties": {
                        "d": {
                            "description": "Attributes block SAID",
                            "type": "string"
                        },
                        "i": {
                            "description": "Issuee AID",
                            "type": "string"
                        },
                        "dt": {
                            "description": "Issuance date time",
                            "type": "string",
                            "format": "date-time"
                        },
                        "destination": {
                            "description": "The target location for this journey where the hunters will go.",
                            "type": "string"
                        },
                        "treasureSplit": {
                            "description": "The type of splits for this journey. 50/50, 25/25/25/25, and so forth. Must add up to 100.",
                            "type": "string"
                        },
                        "partyThreshold": {
                            "description": "The minimum party member threshold needed to charter this journey",
                            "type": "integer"
                        },
                        "journeyEndorser": {
                            "description": "The AID of the ATHENA inner circle member endorsing this treasure hunting journey.",
                            "type": "string"
                        }
                    },
                    "additionalProperties": false,
                    "required": [
                        "i",
                        "destination",
                        "treasureSplit",
                        "partyThreshold",
                        "journeyEndorser"
                    ]
                }
            ]
        },
        "r": {
            "oneOf": [
                {
                    "description": "Rules block SAID",
                    "type": "string"
                },
                {
                    "$id": "ENe_FPWFO867EZ1HKQZwDu9GpDhpyGDFwwZFQYK8_ZaD",
                    "description": "Rules block",
                    "type": "object",
                    "properties": {
                        "d": {
                            "description": "Rules block SAID",
                            "type": "string"
                        },
                        "treasureSplitAgreement": {
                            "description": "Treasure Split Agreement",
                            "type": "object",
                            "properties": {
                                "l": {
                                    "description": "Associated legal language",
                                    "type": "string",
                                    "const": "Signing of the Treasure Hunting Journey split agreement binds the signers to split the treasure received from the journey according to the split terms specified.."
                                }
                            }
                        }
                    },
                    "additionalProperties": false,
                    "required": [
                        "d",
                        "treasureSplitAgreement"
                    ]
                }
            ]
        }
    },
    "additionalProperties": false,
    "required": [
        "v",
        "i",
        "ri",
        "s",
        "d",
        "a"
    ]
}
There is a `kli vc export`, does that do what you want?
Existing now
As far as I could tell that only exports the ACDC, not the supporting KEL and TELs required to verify. I guess what I am saying is, if we could package up everything needed to verify an ACDC, and then make chaining automagic and build a long, presequenced list of required messages to verify the last ACDC in the chain (and the couple of events that follow it), it would be great. I am going to build out that functionality for Qui, so I'll take a stab at adding it to the `kli` if you want.
The other one to look at is `kli vc present` with the `-i` option.
I'm fairly certain there is code in there to export the entire chain.
Interesting, I'm getting this
ERR: can only join an iterable
when I run
kli vc present -n wiseman -a ramiel -s EOB9YVK7UY1BZ2gDWNcFLBMVzcOmPuh6DR6Yg1Zen9qV --include
(with or without --include)
whereas this returns:
❯ kli vc export -n wiseman -a ramiel -s EOB9YVK7UY1BZ2gDWNcFLBMVzcOmPuh6DR6Yg1Zen9qV          
{"v":"ACDC10JSON000591_","d":"EOB9YVK7UY1BZ2gDWNcFLBMVzcOmPuh6DR6Yg1Zen9qV","i":"EIaJ5gpHSL9nl1XIWDkfMth1uxbD-AfLkqdiZL6S7HkZ","ri":"EAuvxnObLjS8E2Yn9-7twA8_ZqEsMC3vispqy26NVdaW","s":"EEq0AkHV-i5-aCc1JMBGsd7G85HlBzI3BfyuS5lHOGjr","a":{"d":"ELKskAf1BKsld9UpOyFwBv4TiIKoR6QLlguDfEeoGS2_","i":"EJS0-vv_OPAQCdJLmkd5dT0EW-mOfhn_Cje4yzRjTv8q","dt":"2023-04-23T09:11:45.768793+00:00","partySize":2,"authorizerName":"Ramiel"},"e":{"d":"EKU0VqkWIcpMQJxu-oVFkE2Fsf-J6tA_oKbOpSxOGImd","mark":{"n":"EAhQ4wojPhBx6evAl-6mHv9enCKLOfDRoK-tv5dj0BoA","s":"EBEefH4LNQswHSrXanb-3GbjCZK7I_UCL6BdD-zwJ4my"},"journey":{"n":"EKriPO1JGk369l5zJGzfFn-yS0dazYu0ktCg8h5pxKZ3","s":"EIxAox3KEhiQ_yCwXWeriQ3ruPWbgK94NDDkHAZCuP9l"}},"r":{"d":"EHP7ROagNEPGR2VaQeghzoCqzwOMuXgZwJUFWn3D-k3K","usageDisclaimer":{"l":"Usage of a JourneyCharter Credential in Abydos may significantly reduce your life expectancy."},"issuanceDisclaimer":{"l":"All desiring entrance into Abydos must present credentials issued by the Veniordine."},"privacyDisclaimer":{"l":"It is the sole responsibility of Holders as Issuees of a JourneyCharter credential to present that credential in a privacy-preserving manner using the mechanisms provided in the Issuance and Presentation Exchange (IPEX) protocol specification and the Authentic Chained Data Container (ACDC) specification.  and ."}}}
I'll dig into it
Enjoy your weekend <@U024CJMG22J>, thanks for responding
`--include` looks like it should work, thanks - I think it's because I'm not setting up a recipient or mailbox or something
Maybe this is my point, I'd like to be able to just dump and ingest things with kli, without agents and mailboxes, piping stuff if possible
I am willing to help take it there, if that makes sense
To answer your question, for sure if there is a feature you'd like to add to the kli, please open a PR. That's the way OSS works!
Yeah will do. I don't know when I'll get the time, but as it will save me some during development I may be able to prioritize it.
I would also say we should enable GitHub discussions on keripy/cesride if they aren’t already
We do not pay for Slack, so messages will disappear over time
I can do that today
Okay I’ll transfer this thread to a discussion about building acdc issuance from primitives. I’ll add code for all the required bits when I have it, in rust
There are sections, including a 'show and tell' one, so I'll create another discussion with sample code when I am ready.
Just found the `--full` option on `vc export`; I must have totally missed the other options. It does what I want but maybe not in the right order
I was surprised it didn't have that, I was going to check in the morning. Order doesn't matter. All implementations should assume that events are asynchronous and can arrive in any order.
Now you can improve the performance of someone receiving your events for sure, but it should not be imperative.
I guess you could also juggle them when you got them, if you got them in a single package. I am trying to figure out how to get from A to B in small iterative steps, is all, rather than implementing the whole thing before I see pieces of it work.
We did do an optimization pass in credential issuance code to try to remove the need for the holder to have to escrow anything in a credential chain. That is probably what you are looking for. I'll see if I can find it in the morning.
Yes exactly, I was wondering how to avoid escrow for now
Cool, I'll find it first thing tomorrow
Thanks Phil
The idea here is really cool, a self-contained set of verifiable data. It would make sense that the presentation logic would send this along.
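For anyone following along, here is a toy sketch of why wire order doesn't matter: a receiver can hold back (escrow) events whose predecessor hasn't arrived and accept them once it has. This is only an illustration of the idea, not keripy's escrow machinery:

def process_stream(events):
    accepted, seen = [], set()
    pending = list(events)
    while pending:
        progressed = False
        for ev in list(pending):
            prior = ev.get("p")                  # prior-event digest, if any
            if prior is None or prior in seen:   # predecessor already accepted?
                accepted.append(ev)
                seen.add(ev["d"])
                pending.remove(ev)
                progressed = True
        if not progressed:
            break                                # the rest stay in escrow
    return accepted, pending                     # pending = escrowed events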
+1 to keri-dev. It seems like a good place to collect cross-implementation chatter or thoughts for implementations that are not active enough to justify their own channel yet.
I created <#C055FTYLLBS|keri-dev> and posted this message of my success in Rust producing a valid ACDC which was consumed by the KERIpy stack:
Anyone interested in joining a `did:keri` working session, we will be meeting on Friday April 28th at 10:00am PDT here: If there is enough interest, we may schedule a bi-weekly meeting, create a channel, etc etc.
Awesome, see you there.
The TADAAA definition is contained in the IIW notes on the session entitled “*Delegatable Verifiable Credentials”*: Tail Authenticated Delegated Attenuated Aggregated Authorizations
nuttawut.kongsuwan
Is it given a proper definition in any of Sam’s whitepapers?
No, it was new from that session at IIW
nuttawut.kongsuwan
So a new whitepaper in the making!
Do we know if there have been any third party security evaluations of KERI? I had an associate ask me this question yesterday.
Another question: do we know whether Estonia solved the key rotation problem in X-Road or any succeeding system? And if they did solve key rotation, how did they do it?
I have a client asking me for a list of companies that currently use KERI. I gave the following list. 1. GLEIF (first in the world) 2. Provenant (telecom) 3. Qui Identity (going live in about five weeks with non-custodial mobile wallets) 4. The Human Colossus Foundation in the EU. They have many different clients all using KERI and ACDC. 5. RootsID with a Cardano blockchain integration Who else should be on that list?
Custodial first
For us
And our 5 week thing is very limited white glove, it's not for public
I mean, the stuff will be viewable by the public, but participants will be invited
nuttawut.kongsuwan
You can also mention that there are 9 candidate Qualified vLEI Issuers undergoing the qualification process by GLEIF. I believe a few are in this Slack channel.
daniel.hardman
new thread about IRL credentials
daniel.hardman
As discussed on today's KERI call, here is an initial PR for a schema for credentials of this type. I'd love further discussion.
petteri.stenius
Does anyone have a link to ACDC meeting recordings? Last week especially. Can't find any recent recordings on the Thanks!
petteri.stenius
This takes me to the meeting. How do I get to recordings?
When someone identifies themselves with an ID card, some devices check whether the card is fake. Wouldn't it be worth adding to the schema an option to record that "the credibility of the certificate was checked by device X"? That would allow expressing a chosen level of guaranteed reliability.
daniel.hardman
<@U02MD0HA7EJ>: it's not clear to me what certificate you are referencing. Are you saying that the credential we issue should have a field that records one or more device checks of that very credential?
I meant that, apart from the person who checked the subject's identity face to face against a passport, for example, there would be an optional field *validation machine*, recording the name and type of machine X that checked the passport itself for forgery (which allows confirmation that the passport with which the person identified themselves is not fake). "Certificate" here refers to a passport or any official ID of a person.
The field is relevant in case the entity you are identifying yourself to is not satisfied with the confirmation of your face-to-face identification by a third party, but also wishes to verify that the third party has checked your physical ID with a machine that checks for forgery.
<@U03RLLP2CR5> I love the IRL idea and its relevance to AI phone scams. Do you envision a way it can be used like Two-Factor Authentication on phone calls, emails, and chat messages?
<@U03U37DM125> I've updated the ACDC meeting page with previous recordings, sorry for the delay.
Who's going to the EU version of the IIW in Zurich? () ? Would be great to meet in person and have some KERI sessions :slightly_smiling_face:
I would love to be there, but can’t make it. Please highlight any good reports. Hopefully someone covers it on social media.
nuttawut.kongsuwan
May I suggest we create a Q&A channel? I think we have a lot of newcomers with a lot of questions. So basically a channel for anyone to ask any questions related to KERI.
Maybe GitHub Discussions would be more accessible
We could use the Edu channel for this?
At least this was included in our original objective
nuttawut.kongsuwan
Thank you!
It definitely has more longevity
Other than the IETF draft specifications and the W3C standards KERI is a part of, what other standards (security or data) do KERI and ACDC meet, exceed, or come close to? I’m building a sales pitch on KERI and ACDC and need to answer the question of spec compliance or conformance.
I’m seeing this from the KERI_Overview.web.pdf document: > Supports GDPR Compliance: Separable identifier trust bases. Non-intertwined KELs. Supports Electronic Digital Signature Compliance (EiDAS, UETA, E-SIGN):
From slide 207 in the same document are the definitions of those legislation items: > USA Electronic Signatures in Global and National Commerce Act (ESIGN), the USA Uniform Electronic Transactions Act (UETA) and the EU Regulation for Electronic Identification and Electronic Trust Services (eIDAS)
And E-CFR > Electronic Code of Federal Regulations: Electronic Signatures (E-CFR) regulation requires non-repudiable audit trail attribution
Nice deck by JP Aumasson (co-author of BLAKE3, etc.) on post-quantum cryptography
Very good slide deck indeed. I especially appreciate the sense of humor.
Yes, a very well done deck. Thanks <@U04RNMG8Z51> . The humor makes it easy to read
daniel.hardman
<@U03KLB48Q2E>: yes, if the risk to eliminate is a risk of non-personhood.
daniel.hardman
I decided to rename these credentials to "face-to-face credentials" as "IRL credentials" is too dependent on a geeky English acronym. "Face to face" translates into other languages easily, where IRL doesn't.
daniel.hardman
<@U02MD0HA7EJ>: I have pondered your comments for a couple of weeks now, but haven't yet become comfortable with the idea. The reason is that I feel that it encourages a confusion about the purpose of the credentials. But maybe I'm still misunderstanding your vision. I wrote a markdown file that clarifies the vision/purpose/use cases of the credentials a bit. See /face-to-face/index.md (). I think the field you're imagining might be called something like `machineCheck`? You described its relevance by referencing a face-to-face identification and imagined that the issuer has checked the issuee's physical ID. However, I don't imagine that an issuer does any identity checking at all before issuing this credential. That's because the purpose of an IRL credential has little to do with identification. The credential is supposed to prove that the issuee is a human, NOT to prove the human issuee's identity. I did put an optional field in the schema (`knownAs`) that would allow the issuer to say, "here's a name that I use to interact with the issuee" -- but if it's used, the issuer is not claiming that the name is legally correct or that it's an identifier others should trust or use -- only that it's something they themselves use. So if Alice and Bob meet at a conference, and Alice issues Bob an IRL credential, she could say that the issuee is `knownAs` "BobTheIncredible" to her, because that's what his nametag said -- but she wouldn't care at all whether that was his legal name or just a nickname he felt like scribbling on a nametag. The vital thing she has to check is that the party receiving this credential is a human. I also put an optional field in the schema for `confirmedHandles`. This is supposed to list email addresses or social media handles that the issuer has used to communicate with the issuee. Perhaps this overlaps a little bit with identification, clouding the purpose of the credential? However, if someone is a work colleague, and knows perfectly well a co-worker's GitHub handle, it seems convenient to let them say so. Like `knownAs`, this is not making a claim about legal identity; if someone's Twitter handle is @bobTheIncredible, it still doesn't mean that's his name. There is a final way that the current fields in the schema seem to overlap with what you are imagining, and perhaps this is the one that we should explore further. It is the `basis` field. This field intends to give a verifier some idea how deep the issuer's knowledge is of the issuee. So an issuer that says the basis of their credential is "family" is saying they've known the issuee intimately for years; "colleague" is another fairly strong but slightly different basis; "acquaintance" is the basis that Alice should assert for Bob when she meets him at a conference. Among the example values I listed for this field is "nist-ial2". This would be a basis where someone would have to compare a person's live face to a credential like a passport. So if that basis is claimed, a `machineCheck` *might* make sense. However, I'm still not sure it's optimal, because if a person actually used a machine to check the validity of a passport, I suspect that the person who did this was working for an institution that requires NIST-IAL2 vetting. Thus, the issuer should be an institution rather than a person. That would make this a different credential type -- not person-to-person, but institution-to-person. What are your thoughts?
Just checking: Will there again be a strong KERI/ACDC delegation next IIW Oct 10-12? Now that the USA has let the unjabbed off the hook, I am able to join the party.
Currently I believe Phil, Karla and myself will be there from GLEIF
Great, would be nice to finally meet in person.
I'm sure Sam will be there.
Thanks <@U03RLLP2CR5> for clarifying. If I understand correctly, the goal here is not to "officially" identify who the person is, but to recognize that this is a human being. I read through the file you wrote. You mentioned in the goal section that the goal is to recognize that it is a person and not, for example, an AI bot. If I understand correctly, the schema has two optional purposes: 1. To imitate a face-to-face meeting where connections between people are created by friends bringing friends, thereby creating new connections and trust. 2. To give a schema to "identification companies" to fill in information about a real, living person. And I'm trying to figure out the solution. I mean, who is the solution for? If it is case 2, why do we need a separate schema? These are companies whose business is identification, which means they probably already have a schema that describes a human being. One or two optional fields could be added to indicate that the person has been identified as a living person. If it is case 1, how is the problem solved? The average person has two levels of reputation. The first level consists of dozens of social connections that you can trust directly. The second level is the reputation a person creates for himself in the general public through social networks or any other public publication. If I understand correctly, the solution is intended for applications that want to prevent a non-human entity from registering as a user several times; otherwise, what does it matter if he is a human or a bot? So the first level is less relevant because it is very limited in scale. This is because the identifier issuer does not have enough reputation for an application to rely on. At the second level, we transfer enormous power to famous people, whom we can apparently trust to only vouch for someone they have really seen (they protect their reputation and do not make money from creating an identity). And even then, one person can contact hundreds of celebrities and create hundreds of identities for himself. The question is, what is the difference between him and a bot, except for the limited number of duplicates? A third case I saw in the presentation in the link you posted was a community. I understand that the reference here is to a community that knows each other and wants to rely on facial recognition to add members to the community. What is the purpose of identification? To prevent a talking bot that wastes community time with fake conversations? In this case, doesn't it make more sense to send a personal invitation to those interested in joining? It may be that I don't understand the schema's intention or what solution it is intended for, but I would appreciate an explanation.
Finally. What a disaster these policies have been. It will be good to see you. I am planning on being there.
daniel.hardman
I don't think the purpose should contemplate your use case 2 (identification companies) at all. That's because the credential must be issued by a person, not an institution. Identification companies never issue credentials as *individuals*; they issue as *institutions*.
daniel.hardman
> If case 1... If I understand correctly, the solution is intended for applications that want to prevent a non-human entity from registering as a user several times, otherwise, what does it matter if he is a human or a bot? This doesn't prevent Sybil attacks, because one person could hold many of these credentials, and register once with each of them. Rather, this prevents an AI or a device from registering AT ALL, and is agnostic about whether a person should be able to register more than once. But your language presupposes that the verifier of the credential will be an org that needs to register users, and that kind of hides the main use case in my mind: *what if the verifier is an ordinary human?* There might not be any "registration." What if Alice wants to have a private chat with a person she met on the internet, but she wants to make sure it's not a bot? What if an author wants people to give feedback on the draft of his new novel, but only if they are real human beings? (Almost EVERYTHING in SSI right now ignores ordinary people as verifiers. I wrote about this problem here: )
Who could improve this definition of “Run off the CRUD”?
I originally phrased it during one of the meetings, but can’t decipher it myself now :sweat_smile:
How about this: RUN off the CRUD. RUN stands for Read, Update, Nullify. Why is it preferred ('run off') over CRUD (Create, Read, Update, Delete)? Consider the need to protect 'authentic data' in a decentralized environment. In a decentralized control model, the data always originates from a controller (aka client). The data created (sourced) by the controller follows the principle of 'Non-Interactive Replay Monotonicity' to protect it from replay (events are changed) or deletion (some events are deleted) attacks. That is to say, the data (or the events comprising it) is never deleted; rather, it is always added to via updates. Each update therefore extends a verifiable, continuous log (e.g. by providing a growing sequence number, date-time stamp, etc. for each update). To enable invalidation of data, a special update, called Nullify, is used. The client therefore updates the server (its peer or peers), which just maintains the log following certain rules (see BADA - Best Available Data Acceptance). To summarise, the server can only Read the log and add Updates to it, including Nullifying ones. So no Create or Delete.
Great! Thx
I can paste it in, or you can do it and receive the “credits” in the commit message.
OK, will do!
OK, done!
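A minimal sketch of the RUN/BADA rule described above, as my own illustration (not keripy's actual BADA/escrow code): the server never Creates or Deletes on its own authority; it only appends controller-supplied Updates that are strictly newer, and "deletion" is expressed as a Nullify update.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Update:
    sn: int                # monotonically increasing sequence number
    dt: str                # controller-supplied date-time stamp
    data: Optional[dict]   # None marks a Nullify update

@dataclass
class MonotonicLog:
    updates: list = field(default_factory=list)

    def read(self):
        return list(self.updates)

    def apply(self, u: Update):
        # Accept only strictly newer updates; never rewrite or drop history.
        if self.updates and u.sn <= self.updates[-1].sn:
            raise ValueError("stale or replayed update rejected")
        self.updates.append(u)

    def nullify(self, sn: int, dt: str):
        # Invalidation is just another append, so the audit trail survives.
        self.apply(Update(sn=sn, dt=dt, data=None))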
Hey everyone, Today, I sent a personal letter to a specific group of KERI suite developers who came just after the leading group (Sam, Philip, and Kevin). If I mistakenly included or excluded you from this group, please let me know so I can correct any errors. The reason we are targeting this group is because they are the primary audience for our Edu site. We believe it has great potential to provide immense value to developers who possess technical skills and fundamental knowledge in cryptography, SSI, and autonomic identifier systems. In the letter (scroll back in the Edu channel), as well as in this announcement, we want to update you on the progress of our site, the “KERI suite search engine (KERISSE): Personal learning environment and consensus building with others.” Your input as developers is crucial in shaping the design and functionality of KERI/ACDC and the KERI suite search engine (). Our goal is to achieve consensus on the meaning of terms within the KERI suite, with a high level of understanding (minimum output level 7, explanation ). We aim to provide a trustworthy resource-scraping process, a glossary service, sharing of search results, exploration of historical concepts and choices, and enhanced visibility and questioning of code structure. Your thoughts and suggestions are invaluable in shaping the KERI suite search engine. Mind you, it’s targeted at KERI developers “second batch”, so please be one or think like one :wink: . You can provide feedback directly or in the HackMD document . We eagerly await your insights and appreciate your contributions to this exciting project. Best regards, Henk
Hi everyone, nice to meet you all! I'm currently contributing to a DAO that's aiming to leverage KERI to build out distributed / bottom up use cases. There was some confusion with the DIF/W3C, keriox etc... but feels like I'm in the right place. Btw <@U03EUG009MY> your YouTube videos helped point the compass, thank you. Really looking forward to digging in, learning and contributing.
<@U059G2AQ1B5> Glad you found your way. Welcome to the journey!
What is a good way to illustrate the need for watcher networks and verification nodes in KERI? I had more difficulty than expected explaining to an engineer why the watcher network is needed. I explained that, according to the duplicity game, Alice can lie to Bob about what she said unless she tells Cedric the same things (KELs and KERLs) that she told Bob. Yet the engineer struggled with the concept because Bob can still cryptographically verify everything Alice tells him. If Alice says two different things at two different points in time, then is it always provable duplicity? If so, then can’t Bob just choose to trust the first version of events that he sees and ignore all others? Or is that not good in the case where Alice was hacked and the hacker published events with her signing keys before she rotated to new ones? Essentially, how do we describe the value of watcher networks? I thought I had this memorized, though it turns out I stumbled more than I thought I would. I get it, though I am not yet good at explaining this to people. If you know of a place in the whitepaper or any presentations, then please point me to them and I will go review those materials. I need to be able to clearly and succinctly justify to members of my engineering team the architectural necessity of watcher networks.
nuttawut.kongsuwan
Here is my mental model for KERI watchers, which may not be accurate. I would like to separate two cases: (1) controllers are typical end users and (2) controllers are organizations that issue credentials. For end users, watchers are probably unnecessary most of the time. Alice can send a KEL to Bob and another inconsistent KEL to Cedric without causing any problems. Duplicity would arise only in the case where Bob and Cedric need to exchange their data about Alice. In this scenario, either Bob or Cedric may set up watchers to monitor Alice’s witnesses. For organizations, they may issue a lot of credentials (ACDCs) to many people, organizations, or even to the public (as untargeted ACDCs). In this case, I believe it is necessary to set up watchers.
nuttawut.kongsuwan
I can give an extreme example where a government is behaving duplicitously. In this case, the government’s witnesses are designed to promulgate different KELs to different groups of people. Let’s say the president of that country issues an (untargeted) ACDC and makes it available to half of its citizens using one version of its KEL. Then, the president uses another version of its KEL that is inconsistent with the previous one to issue another ACDC that is made available to the other half of its citizens. Without watchers, the citizens might not be able to detect that two inconsistent KELs have been used. The government may also repudiate one of its ACDCs at a later time. If watchers are available, on the other hand, the citizens will be able to tell immediately that the government is doing something fishy.
This is a good example. This clarifies that watchers ensure that one party cannot tell two different versions of the facts to two different parties without getting caught, as long as the two parties check with the same watcher network, or check with each other.
rodolfo.miranda
By watchers do you mean witnesses? Duplicity is detected by the witnesses. Then, watchers can validate the KEL and receipts signed by the witnesses to detect misbehavior.
I meant watchers. Is duplicity detected only by witnesses? Or could a controller use two different witness sets to defraud people in a way that would only be detectable by watchers?
rodolfo.miranda
ok, you can have two identical witness sets that can be part of the scam. In that case, I agree that watchers can help detect the inconsistencies.
nuttawut.kongsuwan
<@U03P53FCYB1> The witnesses are assigned by a controller. My understanding is that a malicious controller can also hack the witness KA2CE protocol such that they behave duplicitously.
nuttawut.kongsuwan
I believe the witness KA2CE protocol is only useful when the controller is honest, and some of the witnesses (that are fewer than the BFT threshold) are compromised. If the controller is malicious, or the witnesses are compromised above the threshold, then watchers are needed to detect duplicity.
Hmm. Controllers publish their witness sets (OOBI). Each witness maintains a hash-linked, witness- and controller-signed "micro ledger" (aka KEL). Assuming that witnesses are not compromised (RUN and not CRUD), the duplicity originates from the controller (it signs all the events in the chain, and collects and distributes the witness receipts). In other words, Alice would need to construct completely different versions of the KEL and get her set of witnesses to rewrite the history for duplicity to reach the public. If Alice, on the other hand, publishes different sets of witnesses (not via rotation, which retains history), then it means that there are two versions of Alice's identity in the wild. But these two versions are consistent (because of the write-only KELs).
Watchers can be used both to detect rewritten history (compromised or incorrect witnesses) and to verify an identity's messages and ACDCs. Any agent that has to deal with KERI identities (same as prefixes) has to maintain a local copy of the identity's KEL and watch for rotation (rot + drt) and interaction (ixn) events in order to maintain the latest set of keys, backer (aka witness) sets, and anchors for delegated identities and TELs. While doing so, a watcher must obtain the logs on a periodic basis, compare them against the stored and verified KEL (first obtained via OOBI), and store the difference. Due to chaining (sn + p, p being the SAID of the previous event), any attempt to rewrite history will be detected. At this point, a watcher can poll other witnesses in the set and thus detect "lying" witnesses or determine that the prefix is not trustworthy (assuming the first OOBI check was performed against all witnesses). Furthermore, one can use "first seen replay couples" (sn + timestamp) to determine the moment in time when the attempts to rewrite history started.
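A small sketch (not keripy code) of the watcher behaviour described above: keep the first-seen KEL and flag any later, conflicting event at the same sequence number as provable duplicity.

def detect_duplicity(first_seen_kel, fetched_kel):
    """Return (sn, stored_said, fetched_said) triples where the histories diverge."""
    stored = {e["s"]: e["d"] for e in first_seen_kel}   # sequence number -> event SAID
    conflicts = []
    for event in fetched_kel:
        sn, said = event["s"], event["d"]
        if sn in stored and stored[sn] != said:
            conflicts.append((sn, stored[sn], said))
    return conflicts

# Two different, validly signed events at the same sequence number for the
# same AID is provable duplicity -- something Bob's one-off verification of a
# single KEL copy can never reveal on its own.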
I really like the goal of leveraging “trust that is cheap, common, and neglected by heavier-weight mechanisms”. It seems the hardest part of this credential schema is agreeing on the schema. Adding biometric hashes and confirmed handles, as well as the “basis” field and “caveats”, seems too complicated to get started. Great roadmap features, yet there is so much optionality there, and it is difficult to constrain without knowing how to limit it well. I suggest starting with only four attributes: knownAs, minutes, dt (timestamp), and a very constrained “basis” field, so that it is easy to use and hard to confuse. Since the goal is to establish personhood, online handles aren’t really necessary, or they can be condensed into the knownAs field as a separate credential for each online handle. Why not use a multiplicity of credentials? Or just link them as separate credentials in a graph to the base personhood credential? You could maybe even simplify it to three (knownAs, minutes, and dt) and then have basis be a separate credential, as shown in the following diagram.
With the faceToFaceCredential as the root for a given interaction between two people, you can then layer in additional facts as they become relevant to a given relationship. Then, when that relationship is shared with another person, they can consume as many additional bases or handles as they require to reach a trust assurance level they are comfortable with.
Having multiple bases for the relationship allows it to evolve over time, from a simple name into something more as the story unfolds.
I know it could seem silly to say the original model was too complicated and to then suggest a rather complicated-looking credential graph. The way I see it, each individual credential should be simple, representing a specific kind of interaction. You combine many small credentials to make a credential graph of something overall that is larger. This allows you to have fine-grained expression of each part of the reputation while keeping each individual action simple and quick.
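For illustration only, a minimal attribute block for such a faceToFaceCredential might look something like this (the field names are just the ones suggested above; the values are made up, and this is not a defined schema):

  {
    "knownAs": "Alice",
    "minutes": 30,
    "dt": "2022-10-31T17:20:00.000000+00:00"
  }

A "basis" credential, or one credential per online handle, could then be chained to this root credential via edges, as described in the graph above.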
Good to meet you, Paul. What’s the DAO, if you don’t mind? I ask because I find the bottom-up use cases to be the most interesting — and impactful in the long-term. (I’m creating services at to work this out.)
nuttawut.kongsuwan
I just wrote an explainer for the SAID and ACDC specifications. I feel I am not doing a very good job on the last section for Graduated Disclosure. I would really appreciate it if anyone can give me feedback and/or correction.
I like the build-up of the explanation. A few notes: "to further a transaction" -> "to extend an interaction or exchange"? "the recipient agrees to the terms established by the discloser" -> afaik the issuer can also impose rules on the payload that the disclosee has to satisfy, however I am not 100% sure about this.
nuttawut.kongsuwan
thank you!
I saw a couple more PRs merged into the edu site repo and decided to see what it looks like so far. I am so impressed with all the work Henk and Cord have put into gathering and indexing all the information about KERI. It is very comprehensive already! If you haven't taken a look, I recommend you do.
Thx, Cord => Kor (last name: Dwarshuis)
daniel.hardman
I like your thinking, Kent. More granular is better WRT credentials.
daniel.hardman
I wanted to attend today's KERI call, but I may end up stuck on another call. (I wanted to talk about SADs and SAIDs for opaquely structured data, but that is actually more of an ACDC topic, so could well be done next week instead.)
Suggest that the "pre-2023" Agenda HackIO.md file (created to offload the "current" file due to HackMD file limits) have its URL embedded in the "current" Agenda file.
nuttawut.kongsuwan
May I ask anyone here where I can find the record of the Tuesday meeting? Thanks!
I can't find the link for today's meeting and the link I had in my calendar seems to be old, could someone share it by any chance? Thanks!
We wrapped up the call early. The link should be at the top of <#C055FTYLLBS|keri-dev> I assume that is what other folks use
if you mean the keri-dev meeting
Thank you <@U024KC347B4> and <@U02PA6UQ6BV>!
Really exciting work by Henk and Kor!
This message was deleted.
michal.pietrus
the multisig feature you're looking for internally employs public keys as signing keys and their digests as rotation keys. You don't need to create any additional identifier, because in the end the multisig feature relies on just one identifier that has several members (public keys) and associated thresholds. Actually, thanks to recent changes to the protocol, you can establish a `personal` identifier, as you call it, and, e.g., two rotations later join more public keys (more devices) into it, making it a group in essence.
Right, but this becomes pretty complicated when we scale. What I originally wanted is to have one group identifier that is controlled by n personal identifiers, which themselves are controlled by n keypairs. But it is actually implemented like this: one group identifier is controlled by n keypairs (the group itself does not know which of the controlling keypairs belongs to Alice's personal identifier and which belongs to Bob's).
michal.pietrus
what type of scaling problem do you have in mind?
I think in practice it can become pretty tedious when we generate one keypair per device. It makes sense to rotate the personal identifier every time we add a new device with a new keypair, but if the user is a participant in 50 other groups, not only would he need to rotate his personal identifier to add the new device keys, he would also need to rotate the other 50 groups he is part of.
Assuming every user stores the KEL of every group he is part of, this would mean that every user needs to fetch the new KEL for the group, just because one participant of that group added a new device.
In other words, if Alice wants to add Bob to her multisig, she needs to know all of Bob's device keys. I am searching for a way to achieve some sort of group functionality where Alice only needs to know Bob's personal identifier.
And this could be achieved by adding new members as witnesses or delegates instead of rotating them in as actual controllers via their private keys.
daniel.hardman
Link to paper about saidifying arbitrary document types:
daniel.hardman
BTW, I was a bit hurried and forgot to thank <@U02PA6UQ6BV> for some constructive feedback on the paper. Thanks, Henk!
Is it possible to calculate the SCID of a group participant by just looking at the group's KEL?
nuttawut.kongsuwan
I had a similar problem with embedding large image/video files in ACDCs, and I don’t want ACDCs to get too big when they are transmitted around. I really appreciate that you wrote and shared this document. If you do not mind, I have some dumb questions due to my lack of understanding of how metadata works in typical opaquely-structured data. • When an opaquely-structured data file is saidified, is its metadata also hashed along with its content? — I suppose it is a yes. • When the SAID is embedded in the filename “q2financials-E2948372a9b9019.docx”, if the filename is changed to, e.g., “report-E2948372a9b9019.docx”, is the SAID still valid? — My understanding is that the SAID must be recomputed if the filename is changed. • If I understand correctly, the metadata of a file changes when you use it in different operating systems. Would this cause a problem with the saidification if the files must be used in a multi-OS environment?
I find it hard to understand the process of delegation so please correct me if I'm wrong (also please let me know if I'm correct): 1. Alice incepts an identifier 2. Bob incepts an identifier 3. Now Alice generates a delegation event that delegates signing authority over her identifier to Bob's identifier 4. Now Bob can sign messages with his identifier on behalf of Alice
nuttawut.kongsuwan
Additional details: • In step 2, Bob has to incept his AID in the delegation mode, i.e. specifying Alice’s AID inside his inception event. • In step 3, Alice anchors Bob’s AID in her interaction event.
I see, thank you! In other words, there is no way for bob to decide to be a delegate after he incepted. He has to decide that in the inception event.
nuttawut.kongsuwan
That’s correct.
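For anyone following along, a rough sketch of the two events involved (all SAIDs, keys, and version sizes below are placeholders): Bob's delegated inception event (`dip`) names Alice as the delegator in its `di` field, and Alice's interaction event (`ixn`) carries a seal pointing at Bob's event.

  {
    "v": "KERI10JSON000154_",
    "t": "dip",
    "d": "EBobDelegatedAIDPlaceholder",
    "i": "EBobDelegatedAIDPlaceholder",
    "s": "0",
    "kt": "1",
    "k": ["DBobSigningKeyPlaceholder"],
    "nt": "1",
    "n": ["EBobNextKeyDigestPlaceholder"],
    "bt": "0",
    "b": [],
    "c": [],
    "a": [],
    "di": "EAliceAIDPlaceholder"
  }

  {
    "v": "KERI10JSON0000cb_",
    "t": "ixn",
    "d": "EAliceIxnSAIDPlaceholder",
    "i": "EAliceAIDPlaceholder",
    "s": "2",
    "p": "EAlicePriorEventSAIDPlaceholder",
    "a": [
      { "i": "EBobDelegatedAIDPlaceholder", "s": "0", "d": "EBobDelegatedAIDPlaceholder" }
    ]
  }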
Is there an ACDC call today? Perhaps not given the July 4th holiday?
Yep, that’s it
Thanks Henk
We posted in the toip acdc slack
Apologies
<@U05DT8YPEG1> <@U02N7K951DW> raised your question on the dev call here:
<@U024KC347B4> thank you for taking the time and mentioning me. <@U02N7K951DW> I am very thankful to you for bringing this topic up in the dev call; I watched the recording and the answers indeed helped a lot! :slightly_smiling_face:
KERISSE to the rescue!! We needed to add a term to the vLEI EGF Glossary and it was simple to use KERISSE to find the definition in the ToIP glossary and copy and paste it
rodolfo.miranda
I'm fan #1 of KERISSE !!!
Now we just need to add "ample" to its list of definitions :joy:
These are the next repos we’ll scrape & index for KERISSE. Criteria: not in the search engine yet & is a code repo & changed less than a year ago. Special attention for the vLEI repo: it lacks a main branch, therefore we’ll search through the dev branch.
Screenshot 2023-07-16 at 14.30.17.png
It’s going to be really cool to add AI-powered search into KERISSE, maybe it already is…
Time for a T-shirt :joy:
kerisse.png
rodolfo.miranda
size M for me
Can anyone point me to the documentation of TEL? I was not able to find anything in the IETF drafts and the whitepaper. For example, I would like to know what it actually looks like and how it is anchored in a key event log.
rodolfo.miranda
that's the best I found:
Ah. Yes, I just found that. Thank you ! :slightly_smiling_face:
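For anyone else looking: roughly speaking, a simple (backerless) TEL issuance event looks something like the fragment below, and it is anchored by placing a seal to it in the issuer's KEL. All values are placeholders; treat this as a sketch and check the keripy / vLEI sources for the authoritative shape.

  {
    "v": "KERI10JSON0000ed_",
    "t": "iss",
    "d": "EIssuanceEventSAIDPlaceholder",
    "i": "ECredentialSAIDPlaceholder",
    "s": "0",
    "ri": "ERegistrySAIDPlaceholder",
    "dt": "2023-07-20T12:00:00.000000+00:00"
  }

The issuer's KEL then gets an interaction event whose `a` field holds the corresponding seal:

  "a": [ { "i": "ECredentialSAIDPlaceholder", "s": "0", "d": "EIssuanceEventSAIDPlaceholder" } ]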
joseph.l.hunsaker
Vitalik on “Proof of Personhood”:
joseph.l.hunsaker
or “_Proof of Humanity”_
joseph.l.hunsaker
I like the social-graph-based proof-of-personhood idea
It’s a great idea. And simple to implement in KERI&ACDC.
Sounds like a talk for the fall IIW: *State of De-Identification - why selective disclosure is not a silver bullet*
Where can I find the KERI Dev (Thursday) meeting recordings and/or notes? (I found but it looks like it’s just for Tuesdays.)
charles.lanahan
hmm, I see it in the Slack search results but history doesn't go back that far. I'm getting this error when I run the tests, but the only change I made to the base repository is a spelling fix:
============================================== short test summary info ===============================================
FAILED tests/app/cli/test_kli_commands.py::test_standalone_kli_commands - ValueError: Already incepted pre=BI81UmEUu6Vrii26PxQagwdkWJzJm3Q6PERtUw1c_y9K.
FAILED tests/app/cli/test_kli_commands.py::test_incept_and_rotate_opts - ValueError: Already incepted pre=DNgNWvmgmLbFuS_fuhIsLCc7IpkPKxSfmRC6_Tq7yQQZ.
=============================== 2 failed, 270 passed, 4 warnings in 141.15s (0:02:21) ================================
I was mucking around with the demo scripts beforehand though. Does anyone know what I might be doing wrong? Here's a draft PR if that helps clarify.
No one is currently paying for enterprise slack so we just recently lost history past 30 days, I think
charles.lanahan
yeah, I guess I do have this directory, does state live here that the tests might interact with?
(keri) daidoji@darkstar:~/ssi/keripy$ git status
On branch development
Your branch is ahead of 'origin/development' by 1 commit.
  (use "git push" to publish your local commits)

Untracked files:
  (use "git add <file>..." to include in what will be committed)
        scripts/demo/scripts/

nothing added to commit but untracked files present (use "git add" to track)
charles.lanahan
hmm blowing that away did not fix the issue. Is it a known one?
The cause is that you are reusing a salt to recreate an AID in an existing database. The tests all pass in CI and, for me, locally.
charles.lanahan
is it possible I did that and didn't realize it? I've mostly just been playing around with kli commands and the demo scripts?
If the tests aren't using `temp=True` then the database will be created in either `/usr/local/var/keri` or in `~/.keri` depending on your permissions to `/usr/local/var`
charles.lanahan
ahh, passed as a shell variable or some flag?
In the tests themselves. There are a few tests that are more like integration tests that don't use `temp=true` because they open and close the same database. If you delete any database files in those places it should fix this problem.
charles.lanahan
oh okay. Thanks
charles.lanahan
<@U024CJMG22J>, that totally worked thanks. Do you have scripts locally to blow away `~/.keri` when you're testing? I found that removing it had the tests pass once but then subsequent tests would fail as above. Is this just expected behavior?
charles.lanahan
like running `pytest tests/ --ignore tests/demo` twice would pass the first time and fail the second.
I always just run `pytest` from the base directory and can run it multiple times without having to delete anything
charles.lanahan
oh strange. I am running it from the base directory too. I wonder what's different with my setup? Oh well thanks for clarifying.
I have write access to `/usr/local/var/keri` and it is possible that some tests specifically only delete that directory and not `~/.keri`
The CI/CD on GitHub also uses `/usr/local/var/keri`
charles.lanahan
ahh maybe that's it. My user doesn't have that write access and so writes to `~/.keri`
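In case it helps anyone else, a small cleanup sketch (assumption: you simply want to clear the default local keripy database locations mentioned above before re-running the tests):

import os, shutil

# Remove the default keripy database directories if they exist.
for path in (os.path.expanduser("~/.keri"), "/usr/local/var/keri"):
    if os.path.isdir(path):
        shutil.rmtree(path)
        print(f"removed {path}")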
We should not let this happen. I’m willing to pay for the cheapest Slack version that enables us to view the history again. From what I’ve learned from chatGPT it is possible to recover them:
Is it possible to recover history from Slack by buying a paid subscription? ChatGPT: As of my knowledge cutoff in September 2021, buying a paid subscription on Slack could indeed enable you to recover and access your team’s complete message history.
How can I pay for a Slack channel setup by someone else? ChatGPT: The payment for a Slack workspace (a collection of channels, direct messages, and the people who belong to them) is typically managed by the workspace owner or an admin. If you want to pay for a Slack workspace that was set up by someone else, here are some general steps you could follow: 1. Contact the Workspace Owner/Admin: Reach out to the workspace owner or an admin to express your intention. They can change billing responsibilities if they agree. 2. Change Billing Ownership: If the current workspace owner or admin agrees, they can transfer ownership to you. This can be done through the workspace settings. Remember, this gives you control over the entire workspace, not just the payment, so it’s a significant responsibility. 3. Add a New Payment Method: Once you’re the billing owner, you can add a new payment method. Go to `Settings & administration` > `Billing` > `Payment method`. 4. Update the Payment Information: Enter your payment information and make sure it’s set as the default. You will then be billed for the workspace subscription. Remember, it’s important to discuss these changes with the current workspace owner/admin and other relevant parties to ensure everyone is comfortable with the change. If you’re paying on behalf of a company, it’s also essential to make sure this complies with any relevant policies or procedures your company may have. Please note that this information is based on Slack’s policies and procedures as of September 2021, so it may have changed. Be sure to check Slack’s current documentation or contact their support for the most accurate and up-to-date information.
Thanks for the very generous offer Henk. Sam owns this account so I'll have to check with him tomorrow.
Hey everyone, for the next little while I have an hour (ish) daily to start contributing toward a JS/TS library, if there's agreement that it's a positive contribution to make at this time. Given the right support in terms of understanding the spec and implementation, I'm willing to put some time and effort here. As a start I began exploring the possibility of strong compile-time typings for some of the primitives like identifiers. I quickly ran into TS limitations. For example, it's possible to type for a 44 character base64 SAID but it fails with depth of recursion for 512 (88 char) identifiers. We could use Opaque types but it's a bit hacky with TS and you make some downstream trade-offs. I wonder if it makes more sense to expose types as best we can for consuming devs via JSDoc / TS definitions while leveraging a validation library like Joi for runtime checks, instead of trying to have strict types at the library level. This would also make development more productive imo. Would love some thoughts on the above. Would also love to have someone to help orient me toward action, assuming again that a TS/JS lib is a productive use of effort at this time.
rodolfo.miranda
That will be great!! We are focusing on signify and keria. We've been improving signify-ts over the last two months, aiming to get the minimum set of features to implement a KERI/ACDC application. However, we've been coding in a loose TS that should be improved to get all the advantages.
oh awesome I forgot about signify! I can start digging there it'll help me better understand things as a start.
I might be missing something. The signify client (as in user) is strongly bound to a cloud agent? What if the client aims to be offline-first?
The purpose of Signify is to be a "signing at the edge" client for agents hosted in a KERIA instance.
If you are looking for a full KERI implementation in TypeScript, you'll have a lot of work ahead of you
Ahh got it so it's purpose fit for that scenario that makes sense.
If there's consensus that a full KERI TS implementation is needed at some point in time, the effort has to start at some point. I'm happy to kick it off in exchange for guidance.
There are several precursors that are required to start that. First, a choice would have to be made whether to start with TS-native implementations of the CESR primitives or the Rust versions from CESRide via WASM. Second, if TS is selected, those primitives would have to be extracted from Signify-TS into a CESR-TS library so they could be made available to both Signify-TS and KERI-TS. Finally, decisions regarding private key storage would have to be made, which seems like a pretty significant can of worms that we haven't considered yet.
I suggest attending the dev call this Thursday (link can be found in the subject of <#C055FTYLLBS|keri-dev>) and begin the conversation / discussions.
Will do. I'll do my homework on the points of considerations you've mentioned ahead of the call. Thanks <@U024CJMG22J>.
Where is the link for the KERI/ACDC meetings?
Thanks.
Won't be able to join today, walking in Scotland, hardly any Internet.
You win!
pics or it didn't happen
rodolfo.miranda
I like Kevin's idea to move to Discord that supports full history for free.
Or we just ask zulipchat if they would be open to host an instance for us. They do this for open source projects.
zulip is also much more accessible for developers
it's also open source so it suits our morals. I think this matters a lot as an open source community.
it's actually free
If Discord works for the Open Wallet Foundation and Hyperledger, both Linux Foundation orgs, it is good enough for me.
+1
andreialexandru98
Good enough for midjourney :shrug:
I strongly disagree but I respect your opinion. Discord is not open source, and open source communities should support other open source communities. Additionally, Discord is much harder to access; you have to be logged in to see the chats, for example.
The rust community for example also decided to go with zulip for that reason.
<@U024KC347B4> I think that's a dangerous argument. Just because someone else uses something does not mean that it is good. We should make decisions based on *values*.
I don't want to be disrespectful with this! Just my thoughts and I would love to hear yours too, I am really just interested! "Going against total surveillance & control" This is from your bio. If this is true, I wonder why you support decisions like moving to Discord when there are working alternatives (in fact, an alternative that has proven to be much more efficient for open source software projects) that are much more in line with your statement.
Business models matter and have a huge influence on total surveillance & control.
I say we should think about pros and cons on our own instead of just following others.
Zulipchat Pros • Open source • Can be self-hosted • Topic based threading allow for efficient management of information • No need to sign in to view the messages (easier access for potential new community members) • Free for open source projects Cons • The interface might be new to some people Please add anything ... also I would be interested to hear pros and cons for Discord.
Thanks Leo, appreciate your feedback. A couple of points to consider, and I am glad you pointed to the Rust move to Zulip; a couple of notes from Rust's own guide on Zulip (). > Zulip can be an unintuitive platform to get started with. > Like Rust itself, Zulip is a bit special and reading the documentation before digging can be really helpful. I'm also glad you brought up values, I agree that is extremely important. The KERI community is a diverse one, we have people from _literally_ all across the world and all walks of life; they're not just developers and, as you noted: > zulip is also much more accessible for developers We have lawyers, ux designers, musicians, philanthropists, a wide range of "non-technical" (I dislike this term but it works for now) people who contribute and want to be part of this community. Selecting a tool which would provide the easiest and broadest support to our diverse group is of extremely high value to me. Migrating to something with a familiar interface and server/workspace channel structure would maintain accessibility for the whole community, and maybe it's even a tool they already use, not an additional one. I'd be incredibly sad to think an artist or product manager felt they couldn't engage with the community because they did not feel "savvy" enough. As others may point out, this could all be moot anyway; this was the first mention and there are no concrete plans in the community.
Yeah, I get your argument but to be honest, I don't think the interface is too complex. It's actually intuitive and Zulip heavily improved it in their updates this year. Also ... for product managers and similar people, I think the most important thing would be a simple-to-understand website.
And if easy access is the most important thing for you, then Discord has the big disadvantage of requiring you to create a Discord account before seeing the messages, and since Discord requires 2FA this can be quite a long and complex process.
here is an example instance to play with.
Thanks for sharing that, it's on the Rust community page as well.
Also it's worth mentioning the tour guide on their homepage. It explains how Zulip works in comparison with Slack.
I’m not a big fan of Discord due to how they handle user data and privacy, which I think is what Leo is getting at. I’ve enjoyed Zulip, though really either work just fine. Just adding my two cents. The Discord community is a behemoth. I like the perspective of what will be best for adoption. Discord does seem more well adopted than Zulip, though Zulip seems better for managing threaded conversations.
And being on the same platform as Hyperledger and Open Wallet would be nice.
I agree that open source communities supporting other open source communities is a good thing.
This is the winning argument for me: > Selecting a tool which would provide the easiest and broadest support to our diverse group is of extremely high value to me. > … > I’d be incredibly sad to think an artist or product manager, felt they couldn’t engage with the community because they did not feel “savvy” enough. Over the years I’ve become less of a purist and more of a pragmatist. As much as I like the ethos of Zulip I also want this tech to be adopted and to have a friendly, welcoming community. Zulip is friendly for developers, which I like, though not everyone does. Discord is just stupid simple.
I understand this argument of convenience, especially for non-developers. I just don't think we should sacrifice our core values just because we assume that non-developers like journalists or lawyers are scared of using something like Zulip. In my opinion it's just not worth it, especially because it is an unverified assumption. As a community we can have huge influence, and maybe in the future another community will say, oh hey, look at the KERI community, they use Zulip, why not use it as well? If this happens we have done something good.
> I just don’t think we should sacrifice our core values This statement seems a bit exaggerated. Using Discord wouldn’t require us to sacrifice our core values. We haven’t articulated any core values around data sharing and privacy just yet that conflict with what Discord does. And, really, we already live in a surveillance state. Not using Discord won’t stop large state actors from ingesting every piece of data they can get anyway, and the large data brokers from correlating all of that data. > As a community we can have huge influence I don’t think we’re in a place yet where we could benefit enough from the move to Zulip to justify the potential costs.
So the influence might not be worth it. Unverified assumptions on my part, I agree.
Hm ... I think that's a wrong assumption. If no-one would use encrypted messengers, there would be no incentive for developers to develop encrypted messengers, same for platforms like Zulip. With using it, we incentivize other people to build in that direction.
I like your tenacity. I’m really good with either option.
On the contrary, if everyone used platforms like Discord, there would be no incentive for others to develop open source alternatives. By choosing the right product, we can influence how the future will look.
Influence toward a better future is always worth it, no matter how small. (in my opinion)
WhatsApp is also much more adopted than Signal; still, when it comes to the core values of decentralised identity, Signal just aligns more.
Sorry, I was referring to core values of SSI. I just assumed that this community shares these values.
I think Zulip would not look much different from Slack. We would just have channels like <#C013LDPHYGL|general> <#C055FTYLLBS|keri-dev> and so on as streams and that's it.
For everyone playing around, just think of Streams as channels and just imagine that our zulip would have streams like general, keri-dev, keripy and so on (similar to the channels on slack)
amin.benmansour.10
~Values aside.~ Despite the fact that several communities are migrating towards Discord, I still think it is not the best fit for purposes other than gaming and does not bring the best UX. I just became aware of Zulip (Thanks <@U05DT8YPEG1> for pointing it out). It seems it does bring a similar user experience to Slack, which is so sleek, but with additional features, and I believe it is a great candidate in case we actually migrate.
rodolfo.miranda
as per the webpage, the hosted version is paid if you need history. It's not solving the problem
amin.benmansour.10
Not for open-source projects!
It's free for open source communities.
> I still think that is not the best fit for purposes other than gaming Angular, Ember, Scala, Rust, Elixir, Electron, Flutter, GraphQL, Jest, Redux, Tailwind and Vue.JS take a different view on the "not fit for purpose" argument.
To be honest, I think they mostly did this because a few months ago Discord just got hyped for no particular reason (maybe because it's free and was easy to access back then).
Or last year.
Yes, these organizations of massive influence and structure did it to be cool.
I relent.
I can't argue against that line of reason.
I have no insight into that.
amin.benmansour.10
> • Zulip supports mirroring channels with , , and , and you can connect to other modern chat protocols using .
amin.benmansour.10
We can replicate all messages to Zulip easily :tada:
Yeah, but I don't think that's a good idea. It adds unnecessary overhead.
We should just have a more or less democratic decision. If the majority really thinks Discord or some other non-open source chat platform is the best solution then we should go with that and not have multiple platforms connected to each other. After all, someone has to maintain it.
Zulip has a lot of "3rd Party Processing of data" language in their terms of service when you sign up for the cloud services. If we are worried about alignment with our "values" that would seem to matter a whole lot more than open vs closed source.
Yes, that's true. I am also not too happy about this but there is always the option to self-host it, and you can sign up anonymously which is not possible on Discord. Besides, conversations are public anyway.
Self hosting costs money
And resources to manage it
Right. I don't think we would do this but it's a nice option to have. Zulip sure is not perfect (e.g. they don't offer e2ee) but compared to the alternatives it's the best open-source solution on the market for communities. (ignoring cutting edge platforms that have not been tested a lot)
I think that zulip allows anonymous signups is quite important in that regard.
Discord collects your mobile phone number.
And even worse, if you used that number before to sign up, you need to buy a new number because you can use a number only once ..
Anonymity is not as big of a core value in this community as it is in other cryptography communities where zero knowledge proofs are king. The alternative is something like chain-link confidentiality, which uses the law, rather than fancy cryptography, to protect the data interests and rights of people. Maybe the KERI & ACDC community can get a custom contract with Discord that respects chain link confidentiality. Not sure how likely that is, though that would be a way to encode the core ethos and values of KERI & ACDC in our messaging tool of choice.
chain-link confidentiality through Ricardian Contracts.
I understand.
I hesitate to infer I speak for others and use phrases like “ethos and values” of this community because I only speak for myself.
I just mentioned it because in case someone is worried about these terms.
And to be honest I think Discord has the same terms and as they collect your personal number they sure collect more personal data.
I like you <@U03EUG009MY>. That's a creative solution!
But to be honest I don't think it is very likely that Discord would do this.
Why not fork Zulip and add support for signing up with an ACDC held by a KERI AID
Exactly. That would be possible and I don't think it would be too hard. With Discord however we could not do such things because it is not open source.
Seems like we have a path forward, thanks guys for working through this, I can't wait for the updates!!
Forking it would be fun. From the Zulip : > By default, Zulip allows logging in via email/password as well as various social authentication providers like Google, GitHub, GitLab, and Apple. You can restrict users to logging in via only a subset of these methods. > LDAP and various custom SSO login methods are currently restricted to self-hosted Zulip organizations only. > … > There are several possible roles in a Zulip organization. > • Organization owner > • Organization administrator > • Moderator > • Member > • Guest > • Billing Administrator It would be a fun challenge. Looks like it’s primarily written in Python and JavaScript. Common languages for this community:
image13.png
nuttawut.kongsuwan
I think <@U05DT8YPEG1> raised an interesting point on the value alignment between KERI/ACDC and SSI. I cannot speak for others in the community about what they think the values of KERI/ACDC and the community are, but for me, KERI/ACDC represents something more foundational than SSI. One can use KERI/ACDC to build either SSI or non-SSI applications/ecosystems. From the technical standpoint, the core value of KERI/ACDC is in “Authenticity”, that is, to build something with strong secure attribution. Other properties like privacy, confidentiality, and anonymity are important too, but they are not the top priority. This is quite different from the core value of SSI which emphasizes self-sovereignty, control, and ownership of personal data for individuals. However, the technical value may or may not reflect the values of members of the community. This might be a good philosophical discussion for the community.
robert.mitwicki
I can share the same view from the perspective of Human Colossus: the protocols behind KERI and ACDC address the low-level architecture of authenticity in general. We have many applications where those protocols have nothing to do with SSI. That does not mean you cannot build SSI applications, but we cannot ignore the existing world, which is not SSI-driven and won't be for a very long time. In our case we leverage properties of KERI to provide integrity (SAID) and authenticity (SCID) to the lower layers of the dynamic data economy stack. You can read more about that in this blog post: and in the DDE document referenced in that blog post. Not many people understand the bigger picture of accurate data.
robert.mitwicki
I can recommend Matrix with Element as a good alternative to Discord; we are using it for our community communication. It is not as good as other tools but it does the job. At HCF we put a lot of effort into open source communities, building our own as well; we try to support those projects which share the same values, but I can tell you it is not always possible or easy, as it requires a lot of effort. But if we don't put in that extra effort, who else will? The power of open source is the community; if communities don't support each other, it ends up as a simple marketing tool of big tech companies.
Matrix is cool, I agree. I am only a bit afraid that it will not be as accessible as zulip. What do you think about this concern?
What I mean by accessibility is that you don't have to create an account to see the public messages.
As it is in a forum.
robert.mitwicki
Unfortunately that is true; accessibility is generally a hard problem to solve, but I don't think we need to worry about that since KERI and low-level technology are not for many. If we are able to have 10 core developers over the next few years, that would be a huge success. For such a group either tool would work. The rest of the relevant information can be published in the form of a website, documentation, or an LLM ;)
robert.mitwicki
Spoiler alert: we are preparing new communication tools based on the HCF stack; maybe we could use them in the future ;) They would solve all the problems mentioned and would be amazing :star-struck:.
Sounds interesting!
robert.mitwicki
For those who are interested in the bigger picture of the dynamic data economy, I recommend the recent publication from HCF on Distributed Governance: a Principal-Agent Approach to Data Governance. Would love to hear your feedback!
daniel.hardman
Hi, <@U04H17ZEX9R>. Believe it or not, I just noticed this now. Sorry to be 2 months late. Wow. Yes, metadata stored *inside* the file is hashed along with content. But metadata stored *outside* the file (e.g., last mod date, permissions) is NOT hashed. When SAID is embedded in filename, the SAID does *not* have to be recomputed when the filename changes. The filename is not input to the hash function. Rather, the regex for the filename that's stored inside the file is input to the hash -- and that regex doesn't change when the filename changes. For example, if the regex is `.*{said}\.docx`, then any filename containing the SAID and ending in docx is a valid filename, and nothing needs to be recomputed when the file is renamed. Re. multi-OS: no, this metadata is not hashed and would not cause a problem.
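A rough illustration of the idea (not the actual SAID derivation, which uses a CESR-encoded Blake3-256 digest; a plain blake2b hex digest stands in here), showing that only the file's bytes feed the digest and the filename is only checked against a regex that embeds it:

import hashlib, re

def content_digest(path):
    # Only the bytes inside the file are hashed; external metadata and the
    # filename itself are not part of the digest.
    with open(path, "rb") as f:
        return hashlib.blake2b(f.read(), digest_size=32).hexdigest()

def filename_is_valid(said, filename):
    # Per the scheme above: any name containing the SAID and ending in .docx
    # is acceptable, so renaming never forces a recomputation.
    return re.fullmatch(rf".*{re.escape(said)}\.docx", filename) is not None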
nuttawut.kongsuwan
Thank you so much! I really appreciate the answer.
rodolfo.miranda
“However, we feel that achieving interoperability of VCs that are insecure, reliant on blockchain or web infrastructure, and not stress tested in production at scale is a hollow victory and premature; it is not meaningful interoperability. Security — in the form of proper key management — must come first, it cannot be bolted onto the current VC model just like it cannot be bolted onto the internet after the fact. There is no way around this.” I consider the whole article a celebration of KERI-suite-powered credentials! Great to read.
Ha, that is an amazing quote. I’m going to read the article now. Thanks Henk.
Hi everyone, I took the liberty of creating a (also , or ) with the weekly meeting call events and latest Zoom links. If you'd like to see something added or updated, please let me know! Edit: this calendar is now also viewable from !
I also have an that would embed this calendar at `` for easy reference, if that would be of any interest/utility.
Just a note, the Tuesday meetings are not WoT, they are ToIP and are managed on the ToIP calendar. People should have signed the contributor agreement to attend the Tuesday meetings.
Oh, good clarification! I'll update the event, and link to the ToIP Wiki page instead
Updated:
image12.png
perfect man, thank you for doing this
did you update that PR?
I hadn't changed it since it was created, was there a change you were looking for? It's the same calendar link/embed
Another good one: “Stop! Before Issuing a Single OC… A tree is only as secure as its root. This is why the issuance of an organization’s first, root-level credentials must be done with utmost care, with planning, preparation, and above all, security.”
nuttawut.kongsuwan
charles.lanahan
I wonder what the motivation for this question is; it doesn't really make sense in light of the other two in the set. Does anyone remember what the context was? It's kinda confusing.
charles.lanahan
What systems allowed shared control over identifiers? I couldn't think of an example.
charles.lanahan
Is there an example of a smart contract defined using KERI? Outside of selective disclosure and the oft-mentioned "ricardian contract" which are special cases of smart contracts I'm struggling to imagine constructing a smart contract with keri structures.
I’m not aware of any smart contract platform currently with KERI support built in, though what makes sense to me is something that is EUTXO-style, like the Cardano blockchain, where the inputs are signed as an ACDC and the outputs are signed as an ACDC. The smart contract itself could use an in-memory keypair to sign things, sort of like SignifyTS does.
charles.lanahan
hmm, maybe I'm not creative enough but I don't quite understand. Maybe it also has to do with the fuzziness of "smart contracts" and what that means. Like if I had a (even for the simple case) finite state machine script I wanted to run and you and I were running it in a trust-less context where you didn't trust nodes under my control and I didn't necessarily trust nodes under your control. How would that execute? Like would we go to third party watchers/witnesses with extra functionality? Or is that what judges and jurors are for in the future? Or is there some other mechanism I'm just having trouble thinking of?
charles.lanahan
I will read up on the Cardano model though, as it's been a while.
The chatBot already is, and we’re looking into options for offering different levels of understanding.
charles.lanahan
oh one answer is ethereum through some kind of multi-sig dance I'd never heard of before.
This message was deleted.
Yes, there are several circumstances where giving a controller only signing authority is useful.
... or even only rotation authority
Yeah, I would need only rotation authority. Meaning that I have one keypair in the aid multisig that can rotate other keypairs into the aid.
I think it's still unclear what I ask for, so let me rephrase my question. User A has only one keypair. He wants to use this keypair to only add other users to his AID. He also does not want to store two keypairs, one for current keys and one for next keys, he only has this one keypair. Now .. is it possible to generate a multisig AID, where one of the controllers is that User A which has only one keypair (aka. no next keypair)?
andreialexandru98
andreialexandru98
I think this is what you are looking for
Yeah I am aware of this but I need this only for one controller and then the rest is normal multisig.
  "kt": "1",
  "k" :
    [
      "DnmwyZ-i0H3ULvad8JZAoTNZaU6JR2YAfSVPzh5CMzS6b",
      "DZaU6JR2nmwyZ-VPzhzSslkie8c8TNZaU6J6bVPzhzS6b",
    ],
  "nt": "1",
  "n" :
    [
      0
      "ETNZH3ULvYawyZ-i0d8JZU6JR2nmAoAfSVPzhzS6b5CM",
    ],
So here the first keypair is the keypair I was talking about and the idea is that this particular keypair has no next keypair. This is just to avoid needing to generate a next keypair and store it.
I also never plan to rotate that first keypair so there is no need for a next keypair. It just might happen that this keypair is removed one day but never rotated.
Or is the solution just to do soemthing like this:

  "kt": "1",
  "k" :
    [
      "DnmwyZ-i0H3ULvad8JZAoTNZaU6JR2YAfSVPzh5CMzS6b",
      "DZaU6JR2nmwyZ-VPzhzSslkie8c8TNZaU6J6bVPzhzS6b",
    ],
  "nt": ["0", "1"],
  "n" :
    [
      "ETNZH3ULvYawyZ-i0d8JZU6JR2nmAoAfSVPzhzS6b5CM"     <- random key with 0 rotation rights and is not stored
      "ETNZH3ULvYawyZ-i0d8JZU6JR2nmAoAfSVPzhzS6b5CM",
    ],
the keypairs are independent across generations, you don't need to specify a zero key, just specify your single key
i think this may be the misunderstanding
you don't, for instance, need to have the same number of next keys as current keys
it will just be invalidated at rotation, both the old keys
Ah .. I see. But I thought they need to be because they are mapped. Like current KEY1 can be rotated only by next KEY1.
no, each event has an indexed array of sigs attached
and you use the index
to specify if the order differs
across gens
i believe
I see. So in my case I would just need something like this:
  "kt": "1",
  "k" :
    [
      "DnmwyZ-i0H3ULvad8JZAoTNZaU6JR2YAfSVPzh5CMzS6b",
      "DZaU6JR2nmwyZ-VPzhzSslkie8c8TNZaU6J6bVPzhzS6b",
    ],
  "nt": "1",
  "n" :
    [
      "ETNZH3ULvYawyZ-i0d8JZU6JR2nmAoAfSVPzhzS6b5CM",
    ],
yeah i think the prefixes are applicable to the whole identifier
And the attached signatures array specify that the next key can rotate the second current key.
so the transferable concept applies to the AID, not the individual keys
yes
Okay.
i mean
technically it's moot right
what key was what previously
And in this case .. I can also still remove the first current key in a rotation event, right?
it will be invalid yes
both will
from the first generation
as soon as the rotation event promulgates
as Sam says :smiley:
and with thresholds you can break up which keys are used for rotation and which for signing
from the next key batch
just give the ones 0 that don't matter in the correct context
In regard to that, was my example right? I have not found any example in the KERI specs. So for a simple threshold I would have `kt: "1"` and for fractional I would have `kt: ["0", "1"]`
i know of an example
just a moment
that and the next two sections
read custodial
that one makes it quite clear what I mean
Right, I have seen that but I meant a json example, so I was unsure.
ah; you should be able to generate one with that code i have hooked up to dilithium actually
i can do that if you want, it would be a fun exercise
should only take a few minutes
the test harness is set up to incept
You mean producing an inception event with fractional thresholds?
yeah and rotation
then printing the results
That would be great, although I can already think of how it would work in the code. I believe there is also something similar in the partial inception code already.
yes
in fact it is not as trivial as i thought at first
i may not attempt this now
but i'd like to make the example in that repo
it's good to have
Okay, no worries. Thank you for your help answering this question!
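For completeness, here is a rough JSON fragment of the fractional variant asked about above (digests are placeholders; each weight in the list form of `nt` applies to the entry of `n` in the same position, so the first next key here carries no rotation authority):

  "kt": "1",
  "k" :
    [
      "DnmwyZ-i0H3ULvad8JZAoTNZaU6JR2YAfSVPzh5CMzS6b",
      "DZaU6JR2nmwyZ-VPzhzSslkie8c8TNZaU6J6bVPzhzS6b"
    ],
  "nt": ["0", "1"],
  "n" :
    [
      "EWeightZeroNextKeyDigestPlaceholder",
      "ETNZH3ULvYawyZ-i0d8JZU6JR2nmAoAfSVPzhzS6b5CM"
    ],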
The "Slack History" issue came up on the ACDC ToIP call today and I'd like to thank <@U02PA6UQ6BV> for continuing to work on this issue. We took a brief Zoom chat vote which indicated that Discord was a preferred path forward for now for "general" discussion, GitHub Discussions were mentioned for long lived topics or in depth implementation discussions (ideally with an associated issue/PR) Kent mentioned Zulip again as a tool for development discussions as well, which I think falls into our future plans. A migration to discord will allow us to keep the community together in a "social" manner and allow us to begin to deactivate Slack accounts in order to reduce the cost of a subscription to try retrieve old discussions. I appreciate everyone's thoughtful comments and good intentions on the call and will post a link this week to a Discord server.
"reduce the cost of a subscription to try to retrieve old discussions" -> It’s also a matter of principle. I am not amused with a software platform that holds back a vital part of its former service to lure you into a far too expensive subscription, based on an assessment of “active” users, who are for the most part not really active. If we can buy our way out for a few hundred US$ (the estimated cost based on the current user base was 430 US$ a month (!)) then I’ll do it right away. Buying our way out means: we need to get to see our old history and threads, with explanations by Phil, Kevin, Sam and other experts, to populate Q&As in KERISSE and GitHub Discussion items, either by hand or with API calls. The history we currently see is only 60 or 90 days old.
Thank you, <@U024KC347B4>. As soon as the Discord server is up & running, I’ll use my manager rights in Slack, granted by Sam, to send inactive users an e-mail (out of band), point them to the new platform and unsubscribe them from the Slack user base. When the user base is small enough to have an acceptable exit amount to pay to Slack, I’ll proceed to buy a temporary subscription. And then pull all the history that’s still valuable out and structure it into KERISSE / Discussions, which are both GitHub-(Pages-)based, so open source and with no paywall traps (foreseen).
daniel.hardman
Discussion thread: how to efficiently watch witnesses
daniel.hardman
In Provenant's current command-line wallets, we run `kli local watch` every time we log in. This helps the different members of some of our multisig AIDs catch up if they've missed events since their last login. It works well. However, I'd like an efficient way to do some more ambitious watching than this -- and that's what this thread is about. Thing 1 that I'd like to do: `register a webhook with a witness so I can be called back any time ANY KEL/TEL changes` Thing 2 that I'd like to do: `poll a witness efficiently for all changes in any KEL/TEL since timestamp or state hash X` I am wondering if either of these things is already supported. If not, <@U024CJMG22J> or <@U024KC347B4>, do you think the community would accept a PR from Provenant adding either or both of these features? Unlike the `kli local watch` command, the most convenient scope of both of these features would be "all the AIDs the witness services" rather than an enumerated subset of AIDs. Regarding #1: I don't imagine that registering the webhook has to be exposed to the public over a web service endpoint; I'm rather imagining this as something configured internally by the owner/operator of a witness. Provenant would use it on its own witnesses, because it would make it very easy to discover that a credential has been issued or revoked, or a key has been rotated. Regarding #2: I'm imagining this would happen over a web service endpoint exposed by the witness, and that it could potentially be called by the public (albeit with optional enforcement of some kind of proof of authorization). Provenant would love to call GLEIF's witnesses periodically over an API like this, making sure that we find out about any noteworthy changes rooted in those witnesses. With this feature, if a scope of "all the AIDs the witness services" is deemed too broad, it could still be useful for the smaller scope of an enumerated set of AIDs. It is possible that this second feature is the logical inverse of the witness gossip feature that's been long imagined (pull vs. push), and that witness gossip could be implemented atop it (or at least with a lot of common code). Provenant doesn't need these features in the next 6 weeks, but we probably do need them before the end of the year, so if it's something we need to work on, I'd love to be taught/pointed in the right direction and start the journey. If someone has already implemented part or all of this, or is in the process of doing so, so much the better.
There is no support for callbacks coming out of witnesses. That would be a net new feature. There is a fairly extensive query API exposed from witnesses (as a matter of fact, `kli local watch` uses it) to support query for current state of an AID, current log of an AID and with parameters like, `sn=?` and `anchor={...}` which will live in the witness until that condition is met or until a timeout is reached. All of these work using a KERI `qry` event and when satisfied, the witness sends back a `rpy` event, so they are asynchronous, not an HTTP interactive API. Also, they all work on only one AID at a time. If you want to see how it all works, you should start in `Kevery.processQuery` in `eventing.py` in KERIpy. That is called from the parser when a `qry` event is streamed in.
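For reference, the query Phil describes is carried in a `qry` message whose `q` map holds the parameters. Very roughly, it looks like the fragment below (field names recalled from the keripy query messages, so treat this as approximate rather than authoritative; all values are placeholders):

  {
    "v": "KERI10JSON0000c9_",
    "t": "qry",
    "d": "EQueryMessageSAIDPlaceholder",
    "dt": "2023-08-01T12:00:00.000000+00:00",
    "r": "logs",
    "rr": "",
    "q": { "i": "EQueriedAIDPlaceholder", "sn": "5" }
  }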
The discord link for the community: Please bear with us as we attempt this transition.
rodolfo.miranda
I can send some Nitros!!!!
rodolfo.miranda
<@U02PA6UQ6BV>, I found this Discord bot that parses JSON chat history file exported from Slack and posts them into Discord:
What’s the academic paper Sam referenced that shows de-identification is trivial with the right number of data attributes? I’m having a debate with someone on the usefulness of differential privacy.
I think it was recorded in one of the ToIP/Dev meetings...
(in the written notes)
THanks
Found it:
from 2023-07-25 in the
nice!
charles.lanahan
is there an example of a metadata ACDC disclosure as described here in one of the scripts?
charles.lanahan
or maybe a concrete example somewhere of what this would look like?
I think <@U056TG1AB0B> was working on implementing IPEX which iirc involves using metadata acdcs for part of the exchange
But the PR was very fresh
charles.lanahan
oh okay, I haven't gotten to that spec yet so I'll go there next.
charles.lanahan
thanks
:+1:
I could be incorrect but that's what I recall :smiley:
charles.lanahan
haha no worries. I was just having trouble following this section that I felt like a few ACDC examples would clear me up.
<@U056E1W01K4> is right that a PR was merged today to provide support for all the IPEX verbs. But nothing in there, _yet_ for selective disclosure or metadata ACDCs. Stayed tuned over the next few days as there will be soon. That's where we are headed next
Re-Announce the Discord server, invite link here:
Seems like most active folks migrated with minimal issue.
I am wondering if maybe <@U02PA6UQ6BV> can start to deactivate the remainder of accounts.
I am happy for mine to be deactivated.
Mine too
rodolfo.miranda
same
Yup, cut me loose!
I’m in the process of migrating the history to Discord using Slackord. However, there’s a bug in this software on macOS and I’m waiting for the devs to repair it. Interestingly, I seem to have managed to save the whole history since 2020 in JSON files, without having to buy a membership. Nobody was able to tell me about this option upfront, but anyway: I first want to see the result before I start cheering. Slackord is mission critical at the moment.
daniel.hardman
Can I get a new Discord invite? The one Phil pasted above is invalid.
I guess they expire?
daniel.hardman
I've spent some time studying. My unfamiliarity with the codebase is a handicap, but I made some progress. Let me clarify why I want this. Provenant imagines supporting high-speed signing for millions of brands. These brands will all have a multisig AID, and they will delegate signing authority to a single-sig AID that can be operated by automation at machine speed. Signing an SMS message or similar does not generate an ACDC and is not anchored to the KEL, because the signatures are only relevant for a short lifecycle (e.g., 10 seconds). All of these brands will want to be able to revoke delegated signing authority with low latency (e.g., a few seconds of reaction time). This means that Provenant needs to know if delegated authority was changed on millions of AIDs, within a few seconds of when it happens. I don't believe it will be practical to monitor individual KELs and poll for changes. The good news is, 99.9% of these brands should be using Provenant witnesses. If we configure our own witnesses to tell us when the KEL of an AID changes (the new feature I want, not callable over public web interface), we can then call the witness back and get an updated version of the KEL (existing witness feature), very soon after the change occurs. Since most KELs are unlikely to change frequently, we will have the efficiency we need. It seems to me that what I want might best be hooked up to KeyStateNoticer and WitnessInquisitor in some way. Am I in the right ballpark?
daniel.hardman
Actually, maybe processEvent in Kevery?
Yes, they do
At most a week, or eternally valid; the latter, I think, is not the one to go for.
We should consider shutting this down now that we've migrated to Discord. Avoid confusion.
Fully agree; permanently deleting a workspace is easier than suspending it first (a few maintenance actions have to be executed in the latter case). It’s Sam who has to perform the permanent deletion. The problem with permanent deletion is that we do have the text of all history, but not yet the vids / pictures uploaded to historical conversations. We’re still waiting for a third party to update and debug their Apple Mac tool so I am able to look at what they’ve got and do so. There’s also this open , where <@U024KC347B4> offered to use his MS Windows machine to perform the history scrape (that does not have the Slackord bug). We’ll lose pictures / vids if we delete the workspace permanently before we get the attachments to conversations out. It would be a choice to do so.
From my research (using chatGPT): To download images and videos from a Slack workspace, which are not included in the standard workspace history export, you can follow these steps: 1. Use Slack’s API: Slack’s API provides endpoints for accessing files uploaded to your workspace. You can use methods like `files.list` to retrieve a list of file URLs. This requires some programming knowledge and an understanding of working with APIs. 2. Manual Download: If there aren’t many files or if you’re looking for specific ones, you can manually download them. Go to the conversation or channel where the file was shared and click on the file to download it. 3. Third-party Tools: There are third-party tools and services available that can help you download all files from your Slack workspace. These tools usually integrate with Slack and automate the process of downloading files. Be sure to choose a reputable and secure service, as this involves sensitive data. 4. Request Slack Support: If you’re facing difficulties or if your workspace has a large number of files, consider contacting Slack support for assistance. They may be able to provide a solution tailored to your specific needs. 5. Automate with a Script: If you have programming skills, you can write a script using Slack’s API to automate the download process. This could be especially useful if there are a large number of files. Even though I could go for option 2, I’m not interested to manually reconstruct it this way.
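If we go the API route, a minimal sketch (assumptions: a token with the files:read scope; pagination and rate limits are ignored here):

import requests

TOKEN = "xoxp-your-token-here"  # placeholder; use a real workspace token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# List files known to the workspace, then download each via its private URL.
resp = requests.get("https://slack.com/api/files.list",
                    headers=HEADERS, params={"count": 100})
for f in resp.json().get("files", []):
    url = f.get("url_private")
    if url:  # private URLs require the same bearer token to download
        data = requests.get(url, headers=HEADERS).content
        with open(f"{f['id']}_{f['name']}", "wb") as out:
            out.write(data)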
The text-only history is available here: (by <@U04R6UFGEJK>)
UPDATE: I’ve just purchased a Business+ account to try and get the valuable stuff out. I deactivated everyone, except for: • Phil • Kevin • Sam • Henk IMPORTANT for the well-being of my budget: as an administrator, please do not accept invitation requests anymore!