Slack Archive

I created this set of messages using the Qui SDK atop `cesride`:
{"v":"KERI10JSON0001e7_","t":"icp","d":"ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ","i":"ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ","s":"0","kt":"2","k":["DNGW8-mK9gJUdIe75iZDopMKOyZTvwoM7nfOHFKzl2Xf","DI_VCGvq9G0p-Hko1h3-1Jxy60oCqGZpgHMO-rpRHTr7","DDZGak_BRJueyzxrXBejsDqtAgGhriN9lG-HZb9C4DIE"],"nt":"2","n":["EGNk3LJHkox4iGsj2MAu1OGZZAbfj4UuNA-4txCyH5rS","EBlimjpkreHaz3Y7D5taJhNYFK9FVkwSNaMwHu7Ew2VQ","EDAuXsx-4R6JgDR_dytf-qXRN_3Ei34JOfyq9Ffs1Uiw"],"bt":"0","b":[],"c":[],"a":[]}-VBD-AADAADh6JRLaiG6AYurdeqnfC2lxbAbDh4G3srBB_ZNqNDbXYmVy57BnhtSLAmNLA_8GnbJh2IDlYXwUs98irSK7WsBABCQZ9w6BWkqgoFi3GnfoRNYkzZMrXYDU4eWypSVWD9U0ZCG_nkEDg5EYwj4MldsW8TX3N75yQzMxTpPE7bJ5BINACB618hVBrLxxvc_GCRv4F5-1tXMPqpW4HDZ9NQtv4WyjRrxTwib4myP8yVVmGInQA0M82LRf00M7Kln4jcXiPIH
{"v":"KERI10JSON00013a_","t":"ixn","d":"ELaw_rkRuXpU21iEnXYVSj5fwxde6iUJwpH9s53J9raL","i":"ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ","s":"1","p":"ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ","a":[{"i":"EHThFVgTY1anKaF7pvM3zrkpanrlJxGgdCUGpLx6iNKL","s":"0","d":"EHThFVgTY1anKaF7pvM3zrkpanrlJxGgdCUGpLx6iNKL"}]}-VBD-AADAADjzCtrIFlFALjbK5bsL9yF8HEJHBKJcSvBrrDuSGN77iUDDub_qv1GefpjAJ_uWbjAx11uwObDMOGFWDeOLN0HABAYlHYccKZWlMgrL7OhA9Qk_N-eR0hFc3dRgU1DV5bLf1s8gvjWynsJJW1LsPpyJomjvFHurI18bn8Dn9CJjtkCACDZ6jQcCv31nv09OUd4IqgXfpeOAqDoT_LtuxulOurk8URxL6W9NJvMIGnACrjXs3psaHDjUMwK8vlR9rouICgH
{"v":"KERI10JSON00013a_","t":"ixn","d":"ED27Rx7illu2NyaPzjrTmDLMrJ6AnqzZhGWZ7nd3UytE","i":"ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ","s":"2","p":"ELaw_rkRuXpU21iEnXYVSj5fwxde6iUJwpH9s53J9raL","a":[{"i":"EAj80gram4PjVcpLdMBil2aPxOhUAU63T1qOiizpsOZt","s":"0","d":"EEC4SA2Sk0ViIfcIZGfeeIrLYnerJaOid_7n1sTZjyLB"}]}-VBD-AADAACXdIIUQvHxt2KAVeDZcLfENl2AZsQlAimjjW4Mf_XiM4VRiFXSuWZZ45unMiDqpXShDcqNZzJZyMxVcFSpMwcFABBbi-72kgciB5C7YdSEhGPGypLXufq_sHldpM5DcdJC98R9hK00YC0YaQB2mzRlkvnbl6_xOEHa4MCuYFEhW6IOACBkJ19npxT4GcKE80MsL9KNQ9C4bgQNIwX46LrgWUNdXa-CC-9Km7IcVWmvAfiBcGesx8c0uh4iIivLhDIu9OsN
{"v":"KERI10JSON0000e0_","t":"vcp","d":"EHThFVgTY1anKaF7pvM3zrkpanrlJxGgdCUGpLx6iNKL","i":"EHThFVgTY1anKaF7pvM3zrkpanrlJxGgdCUGpLx6iNKL","ii":"ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ","s":"0","c":["NB"],"bt":"0","b":[]}-GAB0AAAAAAAAAAAAAAAAAAAAAABELaw_rkRuXpU21iEnXYVSj5fwxde6iUJwpH9s53J9raL
{"v":"KERI10JSON0000ed_","t":"iss","d":"EEC4SA2Sk0ViIfcIZGfeeIrLYnerJaOid_7n1sTZjyLB","i":"EAj80gram4PjVcpLdMBil2aPxOhUAU63T1qOiizpsOZt","s":"0","ri":"EHThFVgTY1anKaF7pvM3zrkpanrlJxGgdCUGpLx6iNKL","dt":"2023-04-25T06:47:26.465245+00:00"}-GAB0AAAAAAAAAAAAAAAAAAAAAACED27Rx7illu2NyaPzjrTmDLMrJ6AnqzZhGWZ7nd3UytE
{"v":"ACDC10JSON000236_","d":"EAj80gram4PjVcpLdMBil2aPxOhUAU63T1qOiizpsOZt","i":"ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ","ri":"EHThFVgTY1anKaF7pvM3zrkpanrlJxGgdCUGpLx6iNKL","s":"EGCb76QtyhQzCkZZLfvx-3vz4t5zy1z-OMJd_OOR5a6z","a":{"d":"EEzx73axJqvdhimViG3vLQrl1N4rgop4aTkazBTyNxme","i":"EBCIvVwH0l0bNXGZvIIGoEQohdKi3Vz00zTl0axkiF9M","dt":"2023-04-25T06:47:26.465139+00:00","holders":["Arthur S.","David M.","Ilkin N.","Peter M."],"shortDescription":"Qui devs are rockstars!","headerTitle":"We are rockstars!","logo":""}}-VBj-JAB6AABAAA--FABECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ0AAAAAAAAAAAAAAAAAAAAAAAECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ-AADAAC-h7hcsHYV3XWJfLiF_OWdHBCSMUmEmUwb2Irx-RaXjqbSCNJDJKMvFo1k6__luLaiyUJMnIJDT-usG8IQGA8NABB6b1_RU5WAXJ9ro6MqITHC7XXTuyG0YbafwuC6BL1BpcsN9qoO9lxiKE8N1s6gQtNDxvdn4CvjGL61RdGloT8DACBUyT0OdU90D18RT1SZlicfm4bQkblJpiwynQHBZSRBU9BKDBBdSXeyz4snpTzcNZokeGjZeNUwDHuQNR-Ibm8B
The ACDC should be valid; it worked for me:

❯ rm -rf ~/.keri
❯ kli init -n test --nopasscode                                                                         
KERI Keystore created at: /Users/jason/.keri/ks/test
KERI Database created at: /Users/jason/.keri/db/test
KERI Credential Store created at: /Users/jason/.keri/reg/test
❯ kli incept -n test -a test --transferable true --icount 1 --isith "1" --ncount 1 --nsith "1" --toad "0"
Prefix  EBCIvVwH0l0bNXGZvIIGoEQohdKi3Vz00zTl0axkiF9M
	Public key 1:  DM4zp1KO_mTs17GnepSFL2_8AqXTtShMchtHG1D-MOlT

❯ python test.py                                                                                         
keri: Kever state: b'ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ' First seen ordinal 0 at 2023-04-25T06:48:12.979027+00:00
Event=
{
 "v": "KERI10JSON0001e7_",
 "t": "icp",
 "d": "ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ",
 "i": "ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ",
 "s": "0",
 "kt": "2",
 "k": [
  "DNGW8-mK9gJUdIe75iZDopMKOyZTvwoM7nfOHFKzl2Xf",
  "DI_VCGvq9G0p-Hko1h3-1Jxy60oCqGZpgHMO-rpRHTr7",
  "DDZGak_BRJueyzxrXBejsDqtAgGhriN9lG-HZb9C4DIE"
 ],
 "nt": "2",
 "n": [
  "EGNk3LJHkox4iGsj2MAu1OGZZAbfj4UuNA-4txCyH5rS",
  "EBlimjpkreHaz3Y7D5taJhNYFK9FVkwSNaMwHu7Ew2VQ",
  "EDAuXsx-4R6JgDR_dytf-qXRN_3Ei34JOfyq9Ffs1Uiw"
 ],
 "bt": "0",
 "b": [],
 "c": [],
 "a": []
}

keri: Kever state: b'ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ' Added to KEL valid event=
{
 "v": "KERI10JSON0001e7_",
 "t": "icp",
 "d": "ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ",
 "i": "ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ",
 "s": "0",
 "kt": "2",
 "k": [
  "DNGW8-mK9gJUdIe75iZDopMKOyZTvwoM7nfOHFKzl2Xf",
  "DI_VCGvq9G0p-Hko1h3-1Jxy60oCqGZpgHMO-rpRHTr7",
  "DDZGak_BRJueyzxrXBejsDqtAgGhriN9lG-HZb9C4DIE"
 ],
 "nt": "2",
 "n": [
  "EGNk3LJHkox4iGsj2MAu1OGZZAbfj4UuNA-4txCyH5rS",
  "EBlimjpkreHaz3Y7D5taJhNYFK9FVkwSNaMwHu7Ew2VQ",
  "EDAuXsx-4R6JgDR_dytf-qXRN_3Ei34JOfyq9Ffs1Uiw"
 ],
 "bt": "0",
 "b": [],
 "c": [],
 "a": []
}

keri: Kever state: b'ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ' First seen ordinal 1 at 2023-04-25T06:48:12.981352+00:00
Event=
{
 "v": "KERI10JSON00013a_",
 "t": "ixn",
 "d": "ELaw_rkRuXpU21iEnXYVSj5fwxde6iUJwpH9s53J9raL",
 "i": "ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ",
 "s": "1",
 "p": "ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ",
 "a": [
  {
   "i": "EHThFVgTY1anKaF7pvM3zrkpanrlJxGgdCUGpLx6iNKL",
   "s": "0",
   "d": "EHThFVgTY1anKaF7pvM3zrkpanrlJxGgdCUGpLx6iNKL"
  }
 ]
}

keri: Kever state: b'ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ' Added to KEL valid event=
{
 "v": "KERI10JSON00013a_",
 "t": "ixn",
 "d": "ELaw_rkRuXpU21iEnXYVSj5fwxde6iUJwpH9s53J9raL",
 "i": "ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ",
 "s": "1",
 "p": "ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ",
 "a": [
  {
   "i": "EHThFVgTY1anKaF7pvM3zrkpanrlJxGgdCUGpLx6iNKL",
   "s": "0",
   "d": "EHThFVgTY1anKaF7pvM3zrkpanrlJxGgdCUGpLx6iNKL"
  }
 ]
}

keri: Kever state: b'ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ' First seen ordinal 2 at 2023-04-25T06:48:12.982529+00:00
Event=
{
 "v": "KERI10JSON00013a_",
 "t": "ixn",
 "d": "ED27Rx7illu2NyaPzjrTmDLMrJ6AnqzZhGWZ7nd3UytE",
 "i": "ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ",
 "s": "2",
 "p": "ELaw_rkRuXpU21iEnXYVSj5fwxde6iUJwpH9s53J9raL",
 "a": [
  {
   "i": "EAj80gram4PjVcpLdMBil2aPxOhUAU63T1qOiizpsOZt",
   "s": "0",
   "d": "EEC4SA2Sk0ViIfcIZGfeeIrLYnerJaOid_7n1sTZjyLB"
  }
 ]
}

keri: Kever state: b'ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ' Added to KEL valid event=
{
 "v": "KERI10JSON00013a_",
 "t": "ixn",
 "d": "ED27Rx7illu2NyaPzjrTmDLMrJ6AnqzZhGWZ7nd3UytE",
 "i": "ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ",
 "s": "2",
 "p": "ELaw_rkRuXpU21iEnXYVSj5fwxde6iUJwpH9s53J9raL",
 "a": [
  {
   "i": "EAj80gram4PjVcpLdMBil2aPxOhUAU63T1qOiizpsOZt",
   "s": "0",
   "d": "EEC4SA2Sk0ViIfcIZGfeeIrLYnerJaOid_7n1sTZjyLB"
  }
 ]
}

keri: Tever state: b'EHThFVgTY1anKaF7pvM3zrkpanrlJxGgdCUGpLx6iNKL' Added to TEL valid event=
{
 "v": "KERI10JSON0000e0_",
 "t": "vcp",
 "d": "EHThFVgTY1anKaF7pvM3zrkpanrlJxGgdCUGpLx6iNKL",
 "i": "EHThFVgTY1anKaF7pvM3zrkpanrlJxGgdCUGpLx6iNKL",
 "ii": "ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ",
 "s": "0",
 "c": [
  "NB"
 ],
 "bt": "0",
 "b": []
}

keri: Tever state: b'EAj80gram4PjVcpLdMBil2aPxOhUAU63T1qOiizpsOZt' Added to TEL valid event=
{
 "v": "KERI10JSON0000ed_",
 "t": "iss",
 "d": "EEC4SA2Sk0ViIfcIZGfeeIrLYnerJaOid_7n1sTZjyLB",
 "i": "EAj80gram4PjVcpLdMBil2aPxOhUAU63T1qOiizpsOZt",
 "s": "0",
 "ri": "EHThFVgTY1anKaF7pvM3zrkpanrlJxGgdCUGpLx6iNKL",
 "dt": "2023-04-25T06:47:26.465245+00:00"
}

❯ kli vc list -n test
Current received credentials for test (EBCIvVwH0l0bNXGZvIIGoEQohdKi3Vz00zTl0axkiF9M):

Credential #1: EAj80gram4PjVcpLdMBil2aPxOhUAU63T1qOiizpsOZt
    Type: Shout Out Block
    Status: Issued ✔
    Issued by ECSkjQRsDs12z3pP916haJSLkp_hQh9g8YcO6BBVU4qJ
    Issued on 2023-04-25T06:47:26.465245+00:00
Here is the test code:

from keri.app import habbing
from keri.vdr import credentialing, verifying

from keri import help

import logging

with habbing.openHab(name="test", temp=False) as (hby, hab):
    help.ogler.resetLevel(level=logging.DEBUG)

    # registry and verifier share the same reger so parsed TEL events land
    # where credential verification can find them
    rgy = credentialing.Regery(hby=hby, name='test', base='')
    vry = verifying.Verifier(hby=hby, reger=rgy.reger)
    vry.resolver.add('EGCb76QtyhQzCkZZLfvx-3vz4t5zy1z-OMJd_OOR5a6z', b'.. schema goes here ..')

    raw = (
        """
    .. messages go here ..
"""
    )

    # route parsed credential and TEL messages through the verifier
    hby.psr.vry = vry
    hby.psr.tvy = vry.tvy
    hby.psr.parse(bytearray(raw.replace('\n', '').encode("utf-8")))
Here is a diagram of what's going on (I implemented the solid bits on the right):
ACDC Issuance and Revocation.png
I haven't double checked this so if you spot a mistake let me know!
nuttawut.kongsuwan
I am not capable of commenting, not my depth of expertise yet
andreialexandru98
Hey there, I am trying to run the demo script using the Docker image on M1, but I am getting `ERR: /usr/local/var/keri/ks/test: Function not implemented` when I try to run `source scripts/demo/basic/demo-script.sh`. Is there anything I am missing while building the Docker images?
It seems like someone else had this issue recently. It could be due to a breakage on the `development` branch in KERIpy, which can happen from time to time. I suggest checking out the 1.0.0 tag, installing with `python -m pip install -e ./` from the `keripy` root directory, and then trying that script again.
daniel.andersson
I have experienced the same issue. I've tried different scripts and receive the same error message, i.e. `ERR: /usr/local/var/keri/ks/wan: Function not implemented` when trying to run `kli witness demo &`. Using Docker on an M1 Mac, image tags: `latest, 1.0.0, 0.7.4, 0.7.3`. I assume this is a system/architecture issue (Docker and M1). However, with tag `0.6.8` there are no issues, so I'm not sure how to approach this. <@U0474LZ0ZLG> Did you solve it?
<@U0474LZ0ZLG> and <@U04DQE2G36F> will you provide a set of reproduction steps? I’d like to reproduce this problem. Also, let’s file this in GitHub Issues.
There should be another `ri` going from the `rev` event to the `vcp` event, it's present on `iss` but missing on `rev`. <@U024CJMG22J> this is actually mis-documented I think - it looked to me like `ri` was not required (I thought maybe it wasn't really that useful since you were at most one step back from the `iss` event which contained `ri`) but KERIpy definitely wants the `ri` field. The nonces aren't mentioned either. Do you want me to create a GitHub issue?
I did wonder why `brv` had `ra` and `rev` didn't have `ri`
It shouldn't be needed because the `rev` event is linked to the `iss` event via the back pointer of the digest.
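The back-pointer linkage can be sketched in Python (a hedged illustration, not KERIpy's API; the SAIDs reuse values from the stream above, and the digest-keyed TEL store is hypothetical):

```python
# Sketch: the `rev` event's "p" field is the SAID of the prior `iss` event,
# so a validator could in principle recover the registry identifier "ri" by
# walking back one step through the TEL rather than requiring it on `rev`.

iss = {
    "t": "iss",
    "d": "EEC4SA2Sk0ViIfcIZGfeeIrLYnerJaOid_7n1sTZjyLB",  # SAID of this event
    "i": "EAj80gram4PjVcpLdMBil2aPxOhUAU63T1qOiizpsOZt",  # credential SAID
    "s": "0",
    "ri": "EHThFVgTY1anKaF7pvM3zrkpanrlJxGgdCUGpLx6iNKL",  # registry identifier
}

rev = {
    "t": "rev",
    "i": iss["i"],
    "s": "1",
    "p": iss["d"],  # back pointer: digest of the prior TEL event
    # KERIpy nevertheless requires an explicit "ri" here as well
}

tel = {iss["d"]: iss}  # hypothetical digest-keyed TEL store
prior = tel[rev["p"]]  # one-step walk via the back pointer
print(prior["ri"])     # -> EHThFVgTY1anKaF7pvM3zrkpanrlJxGgdCUGpLx6iNKL
```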
Right, that's what I assumed but the test code wouldn't process the event without it
I can run it again to be sure
Maybe it's the test code, but:
keri: Parser msg non-extraction error: 'ri'
Traceback (most recent call last):
  File "/Users/jason/qui/src/ssi-sdk-rs/build/venv/keripy/lib/python3.11/site-packages/keri/core/parsing.py", line 467, in allParsator
    done = yield from self.msgParsator(ims=ims,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jason/qui/src/ssi-sdk-rs/build/venv/keripy/lib/python3.11/site-packages/keri/core/parsing.py", line 1096, in msgParsator
    tvy.processEvent(serder=serder, seqner=seqner, saider=saider, wigers=wigers)
  File "/Users/jason/qui/src/ssi-sdk-rs/build/venv/keripy/lib/python3.11/site-packages/keri/vdr/eventing.py", line 1566, in processEvent
    regk = self.registryKey(serder)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jason/qui/src/ssi-sdk-rs/build/venv/keripy/lib/python3.11/site-packages/keri/vdr/eventing.py", line 2016, in registryKey
    return serder.ked["ri"]
           ~~~~~~~~~~^^^^^^
KeyError: 'ri'
Yeah that registry key function looks like this:
        if ilk in (Ilks.vcp, Ilks.vrt):
            return serder.pre
        elif ilk in (Ilks.iss, Ilks.rev):
            return serder.ked["ri"]
        elif ilk in (Ilks.bis, Ilks.brv):
            rega = serder.ked["ra"]
            return rega["i"]
        else:
            raise ValidationError("invalid ilk {} for tevery event = {}".format(ilk, serder.ked))
So you are saying it isn't in the doc but the code is expecting it?
Yes
Want me to make an issue for the doc?
I just assumed code was correct
So that doc is waaaaay out of date (I thought I mentioned that before). Technically it is not needed, but it doesn't hurt anything having it there as a helper.
But this is kind of a murky case
Ahh I see
You did mention it
So yeah, a PR for that doc would be amazing!!
Maybe we should add a note at the top
I could update the whole thing if you want
To the best of my understanding
See now you're just teasing me.
and then put a note that says 'this will go out of date, code is the reference'
If you submit that, I'll review it for sure!
Okay I'll create an issue and then assign it to myself
Thank you
I wasn't sure if a nonce field made sense for the Management TEL so I didn't add it to the docs. Let me know if I should. I didn't find much to improve other than the omissions of fields.
I will read thru it later this afternoon.
The nonce is only needed in the registry TEL to ensure uniqueness for an AID creating multiple registries.
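To make the uniqueness point concrete, here is a minimal sketch using sha256 as a stand-in for the real CESR/Blake3 self-addressing digest (field names follow the `vcp` event above; the issuer prefix and nonce values are made up):

```python
import hashlib
import json

def registry_id(ked: dict) -> str:
    # stand-in for self-addressing identifier derivation
    return hashlib.sha256(json.dumps(ked, sort_keys=True).encode()).hexdigest()

# identical registry configuration from the same issuer AID...
base = {"t": "vcp", "ii": "issuer-aid-prefix", "s": "0", "c": ["NB"], "bt": "0", "b": []}

# ...distinguished only by the nonce, yielding distinct registry identifiers
reg1 = registry_id({**base, "n": "nonce-one"})
reg2 = registry_id({**base, "n": "nonce-two"})
assert reg1 != reg2
```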
<@U0507HA0BR9> has joined the channel
I updated the doc with some more bits and updated the PR description. Leave comments and I can change whatever you think should be changed.
Here's a corrected version of the diagram with the `ri` mapping present on the `rev` event:
ACDC Issuance and Revocation.png
I mentioned before that I wrote `Creder` code, but it isn't in `cesride`. I'll post it here and in an issue on GitHub for posterity. The only reason I haven't added it is because I haven't had time to write tests.
use crate::error::{err, Error, Result};

use cesride::{
    common::{Identage, Ids, Serialage, Version, CURRENT_VERSION},
    data::{dat, Value},
    matter, Sadder, Saider,
};

#[derive(Clone, Debug, PartialEq)]
pub(crate) struct Creder {
    code: String,
    raw: Vec<u8>,
    ked: Value,
    ident: String,
    kind: String,
    size: u32,
    version: Version,
    saider: Saider,
}

fn validate_ident(ident: &str) -> Result<()> {
    if ident != Identage::ACDC {
        return err!(Error::Value);
    }

    Ok(())
}

#[allow(dead_code)]
impl Creder {
    pub fn new(
        code: Option<&str>,
        raw: Option<&[u8]>,
        kind: Option<&str>,
        ked: Option<&Value>,
        sad: Option<&Self>,
    ) -> Result<Self> {
        let code = code.unwrap_or(matter::Codex::Blake3_256);
        let creder = Sadder::new(Some(code), raw, kind, ked, sad)?;
        validate_ident(&creder.ident())?;

        Ok(creder)
    }

    pub fn new_with_ked(ked: &Value, code: Option<&str>, kind: Option<&str>) -> Result<Self> {
        Self::new(code, None, kind, Some(ked), None)
    }

    pub fn new_with_raw(raw: &[u8]) -> Result<Self> {
        Self::new(None, Some(raw), None, None, None)
    }

    pub fn crd(&self) -> Value {
        self.ked()
    }

    pub fn issuer(&self) -> Result<String> {
        self.ked()[Ids::i].to_string()
    }

    pub fn schema(&self) -> Result<String> {
        self.ked()[Ids::s].to_string()
    }

    pub fn subject(&self) -> Value {
        self.ked()[Ids::a].clone()
    }

    pub fn status(&self) -> Result<Option<String>> {
        let map = self.ked().to_map()?;

        if map.contains_key("ri") {
            Ok(Some(map["ri"].to_string()?))
        } else {
            Ok(None)
        }
    }

    pub fn chains(&self) -> Result<Value> {
        let map = self.ked().to_map()?;

        if map.contains_key("e") {
            Ok(map["e"].clone())
        } else {
            Ok(dat!({}))
        }
    }
}

impl Default for Creder {
    fn default() -> Self {
        Creder {
            code: matter::Codex::Blake3_256.to_string(),
            raw: vec![],
            ked: dat!({}),
            ident: Identage::ACDC.to_string(),
            kind: Serialage::JSON.to_string(),
            size: 0,
            version: CURRENT_VERSION.clone(),
            saider: Saider::default(),
        }
    }
}

impl Sadder for Creder {
    fn code(&self) -> String {
        self.code.clone()
    }

    fn raw(&self) -> Vec<u8> {
        self.raw.clone()
    }

    fn ked(&self) -> Value {
        self.ked.clone()
    }

    fn ident(&self) -> String {
        self.ident.clone()
    }

    fn kind(&self) -> String {
        self.kind.clone()
    }

    fn size(&self) -> u32 {
        self.size
    }

    fn version(&self) -> Version {
        self.version.clone()
    }

    fn saider(&self) -> Saider {
        self.saider.clone()
    }

    fn set_code(&mut self, code: &str) {
        self.code = code.to_string();
    }

    fn set_raw(&mut self, raw: &[u8]) {
        self.raw = raw.to_vec();
    }

    fn set_ked(&mut self, ked: &Value) {
        self.ked = ked.clone();
    }

    fn set_ident(&mut self, ident: &str) {
        self.ident = ident.to_string();
    }

    fn set_kind(&mut self, kind: &str) {
        self.kind = kind.to_string();
    }

    fn set_size(&mut self, size: u32) {
        self.size = size;
    }

    fn set_version(&mut self, version: &Version) {
        self.version = version.clone();
    }

    fn set_saider(&mut self, saider: &Saider) {
        self.saider = saider.clone();
    }
}
<@U03EUG009MY>
Screenshot 2023-04-27 at 11.30.17 AM.png
Thanks! got it in the notes for
daniel.andersson
Hi Kent, I appreciate your assistance in looking into this issue. Here are the steps to reproduce the problem, which I encountered with a fresh Docker installation on a MacBook M1:
1. Run the following command to pull the KERI Docker image and start an interactive shell: `docker run --rm -it gleif/keri /bin/bash`
2. Once inside the Docker container, try executing a command using the KLI (KERI Command Line Interface), for example: `kli witness demo`
Upon running the KLI command, the following error message appears: `ERR: /usr/local/var/keri/ks/wan: Function not implemented`. My suspicion is that this issue could be related to the MacBook M1 architecture, a possible path error, or a compatibility issue with one of the third-party libraries used by KERI. As I mentioned in my previous message, I was able to run the same commands without any issues using the 0.6.8 image tag. However, the problem persists in the latest and other mentioned image tags (1.0.0, 0.7.4, and 0.7.3). I've also tested this with the help of some colleagues who used different machines, and it seems that only those with a MacBook M1 encountered this error. Please let me know if you need any further information or clarification. Also, do you think it's a good idea for me to file this issue on the GitHub Issues page? Thank you for your help!
andreialexandru98
I ended up having to `pip install --no-cache-dir lmdb falcon netifaces` again and got it to work
For those wondering about parsing/verification code in KERI/ACDC using Rust, I wrote some untested stuff and put it in the `parside` repo.
I think it's missing a couple checks but it is a good start.
For instance, on re-reading I don't see anything ensuring that the next set of keys is the one used to sign a `rot` message - the digests from the prior establishment event should be verified. As I said, untested.
Oh, no it's okay. We get the prior est event and then grab its `verfers`. So nothing to worry about regarding the `rot` events but I'd still be wary of trying to use that code directly
Wait that doesn't make sense, they are digests
Yeah I do think we're missing a check.
            for (i, diger) in pserder.digers()?.iter().enumerate() {
                if !diger.verify(&verfers[i].raw())? {
                    return err!(Error::Verification);
                }
            }
needs to be added I believe.
(untested)
Hah, I was incorrect: pserder may not be an establishment event, so we need to do more logic. But this is useful information for everyone.
I made this issue to track this, and also verified the code:
daniel.andersson
I created a "custom" Dockerfile to use while figuring out why the official one doesn't work with M1. What's confusing is that it did work perfectly, until it didn't - without any insights into what might have happened. Anyway, this is the Dockerfile i'm using:
# Use the official Python base image
FROM python:3.10

# Install required system packages
RUN apt-get update && \
    apt-get install -y git libsodium-dev curl

# Install Rust and Cargo
RUN curl --proto '=https' --tlsv1.2 -sSf  | sh -s -- -y

# Update the PATH environment variable to include Cargo
ENV PATH="/root/.cargo/bin:${PATH}"

# Clone the KERI repository
RUN git clone  /keripy

# Set the working directory to the KERI repository
WORKDIR /keripy

# Install KERI package and its dependencies using setup.py
RUN python -m pip install -e ./

# Set the working directory to /app
WORKDIR /app

# Copy api specific requirements.txt into the container
COPY app_requirements.txt ./

# Install Python dependencies from api specific requirements.txt
RUN pip install --no-cache-dir -r app_requirements.txt

# Make port 5050 available to the world outside this container
EXPOSE 5050

# Copy the current directory contents into the container at /app
COPY . .

# Set the ENTRYPOINT for the container
ENTRYPOINT ["sh", "entrypoint.sh"]
Awesome, thank you. I’ll read through it and post my detailed reply here.
I have a scenario I'm trying to figure out: multi-sig, partial rotation. Consider the case where rotation authority has been split amongst several key holders, and one of them will no longer participate in rotation events. It should be possible to rotate that party out if the signing threshold permits, but how do you construct the rotation event without the unblinded public key from the previous event, if the removed party won't or can't produce it?

Can I just put a bogus key there and let the verification fail, since the threshold will still be met? It has no signing authority anyway, only one-use rotation authority, so it isn't really a risk, I don't think. In my code I actually check that the digests all match from event to event, but that obviously won't work in this scenario. I'm going to try constructing such an event sequence to see what KERIpy thinks of it.
KERIpy is fine with this.
keri: Kever state: b'EJnwGc6EDoCrwqOy0KuQirScW9DJH9LFcmkxjEbhfAXY' Added to KEL valid event=
{
 "v": "KERI10JSON000234_",
 "t": "rot",
 "d": "EFipheqOSlqw3e6me_P1MiJ1wMUWQjA4vFSEn3BVv1Cv",
 "i": "EJnwGc6EDoCrwqOy0KuQirScW9DJH9LFcmkxjEbhfAXY",
 "s": "2",
 "p": "EGBlaW8bsmrm5MmgTSXKGFcVglKfPUn-LkNYwnM8a99B",
 "kt": [
  "0",
  "0",
  "1/2",
  "1/2"
 ],
 "k": [
  "DPFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
  "DPE9fWVXKz3eLDFN0YqSFeO2TS2teI7lW1x14ye3k9pR",
  "DIT638f_Ph1gLg9RhCiDbPj62Qp1NF3MVvXYsrbIgtyF",
  "DPgrQmeG7qQt1UnpjWmBaNpAPhvHf93rbrEEtQ1pSzKI"
 ],
 "nt": [
  "1",
  "1"
 ],
 "n": [
  "EIkIx_uZ5w1DVIdwWdbiRRCTsj14-F1kuGL81xb05GSX",
  "EA5t2o71moYZNvKVQ3RCaLmOacQXl3RgwWn7ib77eTea"
 ],
 "bt": "0",
 "br": [],
 "ba": [],
 "a": []
}
It should be clear which key I rotated out
I modified my code to only check the verified indices for things like this. Everything seems okay, but it feels weird to put fake keys in these events.
I guess maybe you could just omit the keys that aren't going to be used and remove the corresponding 0 weights from `kt`. I'll try this and see how KERIpy likes it.
keri: Parser msg non-extraction error: Failure satisfying nsith=['1', '1'] on sigs=['AACtfcyFNeFOjbByUWFDjM70tFdBxf5BRWChNNySlZIN-uamT2_kc_CGrcob4WRTIWTtIb5v1BtxonuKW3dsbc4B', 'ABA1BAbf32yZlhf0HxBlrrk_dUaqluvX4fjv7CRZY8qhA1JZq4qW2FdnReuPN3jEQODUBlXGHn11rNgmejRjqOsD', 'ACAeUpa4TpOwCk-LiNHwCxjEkR9w492hGHJkm3ixvryrm-T5TAa0tuKDKVnF5qWhwUsCf9EHM8s4HGEZjHdT1dsC'] for evt={'v': 'KERI10JSON000201_', 't': 'rot', 'd': 'EDDXZQ290PLKxid5_iHItvOlC2TD-2lgdoq3W-tkQ1_7', 'i': 'EMHcIZJvRZgszqcOEwFY_u50ytN2ZGMmMb8yyZ5aC4WH', 's': '2', 'p': 'EFi6G12KTBxVCjVMiy4pl_C7b8_lEzleRRlfARvdklOB', 'kt': ['0', '1/2', '1/2'], 'k': ['DPE9fWVXKz3eLDFN0YqSFeO2TS2teI7lW1x14ye3k9pR', 'DCGa1idTkUD4wTBgP02pN56wIcV2AWfcJlzop9SnDvXv', 'DCbVbczU9GaIT9BU6PtMA2vArTmRgudyJyWOTm7RJ9Cn'], 'nt': ['1', '1'], 'n': ['EIkIx_uZ5w1DVIdwWdbiRRCTsj14-F1kuGL81xb05GSX', 'EA5t2o71moYZNvKVQ3RCaLmOacQXl3RgwWn7ib77eTea'], 'bt': '0', 'br': [], 'ba': [], 'a': []}
I made my code pass. I'll post the KEL next and explain how.
{"v":"KERI10JSON000199_","t":"icp","d":"EMHcIZJvRZgszqcOEwFY_u50ytN2ZGMmMb8yyZ5aC4WH","i":"EMHcIZJvRZgszqcOEwFY_u50ytN2ZGMmMb8yyZ5aC4WH","s":"0","kt":["1/2","1/2"],"k":["DLQ0dV8X2SSaM0HWRQBMcdYVhvwaCesKbGdsfMuoUtU1","DDH-NjWAUQAuJtmoZ-3Vo0JThr-VK3PRZ5WClWh8ksXq"],"nt":["1","1"],"n":["EBgxbX-PxloGb-UzIuLGm9OHZ2h4yqSawHeqzxaSJVWR","EGtl5QBUy3O0BOFB5mCdOaOuy9cOTQ8mjaID_9B8quEc"],"bt":"0","b":[],"c":[],"a":[]}-VAt-AACAAAuic32VIDz7Op6JnKQ5_sgb7w52iOzybjeAK1nAUy_BYDPCvOJ2Qz198HcYiiGER97zq2p9ePNfGv5MZQY7ksJABA2AEsWcQtqV6rzd_W-BbnfVjm7YX66FTAfTx06RemPHcb3jJ_B9rS6aOOfyqXi88w0DaPHTAouOSRNvGO6pMMK
... some ixn events ...

{"v":"KERI10JSON000201_","t":"rot","d":"EDDXZQ290PLKxid5_iHItvOlC2TD-2lgdoq3W-tkQ1_7","i":"EMHcIZJvRZgszqcOEwFY_u50ytN2ZGMmMb8yyZ5aC4WH","s":"2","p":"EFi6G12KTBxVCjVMiy4pl_C7b8_lEzleRRlfARvdklOB","kt":["0","1/2","1/2"],"k":["DPE9fWVXKz3eLDFN0YqSFeO2TS2teI7lW1x14ye3k9pR","DCGa1idTkUD4wTBgP02pN56wIcV2AWfcJlzop9SnDvXv","DCbVbczU9GaIT9BU6PtMA2vArTmRgudyJyWOTm7RJ9Cn"],"nt":["1","1"],"n":["EIkIx_uZ5w1DVIdwWdbiRRCTsj14-F1kuGL81xb05GSX","EA5t2o71moYZNvKVQ3RCaLmOacQXl3RgwWn7ib77eTea"],"bt":"0","br":[],"ba":[],"a":[]}-VBD-AADAACtfcyFNeFOjbByUWFDjM70tFdBxf5BRWChNNySlZIN-uamT2_kc_CGrcob4WRTIWTtIb5v1BtxonuKW3dsbc4BABA1BAbf32yZlhf0HxBlrrk_dUaqluvX4fjv7CRZY8qhA1JZq4qW2FdnReuPN3jEQODUBlXGHn11rNgmejRjqOsDACAeUpa4TpOwCk-LiNHwCxjEkR9w492hGHJkm3ixvryrm-T5TAa0tuKDKVnF5qWhwUsCf9EHM8s4HGEZjHdT1dsC
Here's the pertinent Rust code to making these events be acceptable:

let mut v = vec![];
for index in verified_indices {
    v.push(&verfers[index as usize]);
}

let mut indices = vec![];
for (i, diger) in digers.iter().enumerate() {
    for verfer in &v {
        if Diger::new_with_ser(&verfer.qb64b()?, None)?.qb64()? == diger.qb64()? {
            indices.push(i as u32);
            break;
        }
    }
}

if !ntholder.satisfy(&indices)? {
    return err!(Error::Verification);
}
it basically allows reordering of keys and removal of keys, while still ensuring the prior next threshold is satisfied (with respect to the previous next threshold indices)
should probably return a validation error there
and obviously one could optimize that code, i'm not being very efficient
here's some better code:
let mut verified_digers = HashSet::new();
for index in verified_indices {
    let verfer = &verfers[index as usize];
    let diger = Diger::new_with_ser(&verfer.qb64b()?, None)?;
    verified_digers.insert(diger.qb64()?);
}

let mut indices = vec![];
for (i, diger) in digers.iter().enumerate() {
    if verified_digers.contains(&diger.qb64()?) {
        indices.push(i as u32);
    }
}

if !ntholder.satisfy(&indices)? {
    return err!(Error::Verification);
}
Or wait, is this problem solved more elegantly by indexes? I feel like indexes can be hard to manage in a multi-sig custodial situation. The tool I built to do the key management is very secure and leaves no trace, so asking people to coordinate indexes in that situation, or even track these things, becomes problematic. I just generate `Cigars` and then extract them into raw.

I think the core issue here, though, is that the keys themselves are an ordered list and signatures have indexes encoded into them: the lists can't have things removed easily unless we do something like I am suggesting above. For instance, if you remove element 3, how do you know the third element should have index 4? Your signature indexes won't make sense, so you need to put something in its place, which seems a bit awkward.

The solution I propose treats each event independently, matching keys by digest, not by list order. The list order is only enforced for the current event, and of course the following messages until the next establishment event. So every event has an ordering enforced, but from event to event that order can change. I think this should be okay.
But please let me know if I am incorrect!
The new partial rotation logic does rely on dual indexes in the signatures for this very reason
Oh I see! That's what ondex is for?
So if positions change because, for example you are leaving someone completely out of a rotation you specify the current signing index and the prior rotation index. Yes, that is what ondex is for
I had wondered that for a while now, thanks
LOL, yeah its not overly obvious!
Okay cool, I can reimplement and get both KERIpy and my code to agree. Thanks!
If you look at `test_the_seven` in `test_grouping.py` you should see a working example of partial rotation if I recall correctly.
oh awesome, thank you
(yes, the name was inspired by GoT)
Hahah! For closure: I got my code to produce output accepted by KERIpy and Rust using ondexes. Thanks for all your help!
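The dual-index idea can be sketched as follows (a toy illustration with made-up names and values, not KERIpy's or cesride's actual structures):

```python
# Each rotation signature carries a signing index (position in the current
# event's "k" list) and an ondex (position in the prior event's "n" list of
# next-key digests), so keys can be dropped or reordered between events.

prior_n = ["digest-A", "digest-B", "digest-C"]  # prior next-key digests
current_k = ["key-B", "key-C"]                  # key-A's holder rotated out

# hypothetical attached signatures as (signing_index, ondex) pairs:
sigs = [(0, 1), (1, 2)]  # key-B was prior n[1], key-C was prior n[2]

# the prior-next threshold ("nt") is then evaluated over the ondexes,
# while the current threshold ("kt") is evaluated over the signing indexes
ondexes = [ondex for _signing_index, ondex in sigs]
print(ondexes)  # -> [1, 2]
```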
This Thursday is the first of the new `keri-dev` calls. GLEIF has created a recurring meeting to record the new calls: Meeting ID: 816 7978 2107, Passcode: 379242 (I'll pin the meeting info). As WebOfTrust is not an official standards organization, we'd like to propose the following licensing for the calls so we ensure no future uncertainty about how contributions are made:
These calls utilize the following Intellectual Property Rights (Copyright, Patent, Source Code): Copyright mode: Apache2 and IETF Copyright Patent mode: Apache2 Source code: Apache 2.0, available at .
If people have concerns we should address them before the first call.

The expectation of these calls is to facilitate those actively contributing to a project/interop under the KERI umbrella; the outputs of these calls are not specifications but code. Decisions/resolutions should be in the form of GitHub Issues/Discussions.
Note that I had to log in to Zoom to get in. So no anonymous access. :+1: It said my email had to be verified, but it looks like it took me in without any verification.
here's some for everyone. cc <@U024KC347B4>
rodolfo.miranda
What time is the keri-dev meeting today?
7:00am PDT
I'm getting an `Invalid Meeting Id (-1)` error
Same
weird petteri and I are here...
me too
There are two links
Top link
ID -q
-1
Click on the link, not the topic
I added a "Meetings" category and updated the discussion from today.
now has the document, so issues and PRs can be opened here and the other signify libs can point to it.
<@U055XBX2EAD> missed his connection to the Washington DC conference :disappointed: so I didn’t get a chance to invite him, in-person, to our weekly keri-dev meeting on Thursdays. But hopefully he can join us next week or soon to discuss did:keri
Yes :( Will you still be in Washington DC tomorrow, or no? I just landed now...
In any case, definitely planning to join Thursday meetings in the future.
The conference was a little bit too surface level for me to come back tomorrow. I could be convinced to come down there to work on did:KERI for 4 hours :laughing:
Well let's just say I'll be there all day tomorrow and have no plans yet :)
rodolfo.miranda
<@U024CJMG22J>, I noticed that in the keripy `Salter` you or Sam hardcoded libsodium parameters with fixed numbers instead of using library constants:
if tier == Tiers.low:
   opslimit = 2  # pysodium.crypto_pwhash_OPSLIMIT_INTERACTIVE
   memlimit = 67108864  # pysodium.crypto_pwhash_MEMLIMIT_INTERACTIVE
elif tier == Tiers.med:
   opslimit = 3  # pysodium.crypto_pwhash_OPSLIMIT_MODERATE
   memlimit = 268435456  # pysodium.crypto_pwhash_MEMLIMIT_MODERATE
elif tier == Tiers.high:
   opslimit = 4  # pysodium.crypto_pwhash_OPSLIMIT_SENSITIVE
   memlimit = 1073741824  # pysodium.crypto_pwhash_MEMLIMIT_SENSITIVE
I'm guessing that the hardcoding is to protect against changes in the library.
Should we use the same pattern in the TypeScript and Rust implementations?
Yes, that’s the idea
rodolfo.miranda
Thanks. Just hardcoded it that way in TS.
I believe Jason addressed this in Rust, I think I remember the conversation
Yes he did. He also submitted the PR to keripy to remove one const reference we had
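For anyone else porting this, the tier-to-parameter table from the keripy snippet above can be captured as plain data that a TypeScript or Rust implementation can mirror. A sketch (the `stretch_params` helper name is illustrative; the values are the libsodium `crypto_pwhash_*` INTERACTIVE/MODERATE/SENSITIVE constants, hardcoded to insulate against upstream changes):

```python
# Tier -> Argon2 parameter table, mirroring the keripy snippet above.
# Values are libsodium's crypto_pwhash_OPSLIMIT_*/MEMLIMIT_* constants,
# hardcoded so an upstream library change cannot silently alter key stretching.
ARGON2_PARAMS = {
    "low":  {"opslimit": 2, "memlimit": 67108864},    # INTERACTIVE (64 MiB)
    "med":  {"opslimit": 3, "memlimit": 268435456},   # MODERATE (256 MiB)
    "high": {"opslimit": 4, "memlimit": 1073741824},  # SENSITIVE (1 GiB)
}

def stretch_params(tier: str) -> tuple[int, int]:
    """Return (opslimit, memlimit) for a security tier."""
    p = ARGON2_PARAMS[tier]
    return p["opslimit"], p["memlimit"]
```

Any port that hardcodes this same table will interoperate on key stretching regardless of which libsodium version it links against.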
rodolfo.miranda
Also, the paths on the salty keeper for the client AID are `signify:controller00` and `signify:controller10`, not `signify:controller01`. I'll submit the correction
Just so everyone is aware, there is sometimes a difference in argon2 params. The memlimit in particular I've seen expressed in `kb` and in `b`, I believe
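To make the unit pitfall above concrete: libsodium expresses `memlimit` in bytes, while Argon2 APIs that follow RFC 9106 take a memory cost in KiB, so the same 64 MiB interactive limit is written two very different ways. A quick check:

```python
# libsodium expresses memlimit in bytes; Argon2 APIs following RFC 9106
# take the memory cost (m) in KiB. The same 64 MiB interactive limit
# therefore appears as two very different numbers.
MEMLIMIT_INTERACTIVE_BYTES = 67108864                  # libsodium: bytes
memory_cost_kib = MEMLIMIT_INTERACTIVE_BYTES // 1024   # Argon2-style: KiB

assert memory_cost_kib == 65536                        # 64 * 1024 KiB == 64 MiB
```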
nuttawut.kongsuwan
If I understand correctly, the current implementation of KERI keeps the current and next keys in the same key store. I believe that, in theory, a controller may keep the keys in different locations, e.g., the current key in a hot wallet and the next key in a cold wallet. May I ask if this feature is simply not prioritized or if it is unnecessary?
Has anyone dealt with
ERR: 'NoneType' object has no attribute 'accept' 
when trying to run
kli witness demo
rodolfo.miranda
You are correct that `keripy` uses the same keystore for both keys. The efforts are now more focused on the Signify+KERIA architecture. In Signify, which has the role of key generator and signer, there's a way to use an HSM to store the keys via an extension module. It shouldn't be difficult to create a module that implements the separation of key storage that you mentioned.
Restarting my computer finally cleared the problem but I don’t know what was actually wrong ;p
rodolfo.miranda
you need to delete old keystores and start fresh :slightly_smiling_face:
I was doing a lot of stops/starts via bash script. I always rm -rf /usr/local/var/keri/*;rm -Rf ~/.keri/* before running the witness demo….. but maybe something got in a weird state
rodolfo.miranda
check the folder where you run the command. It may create config files that may need to be deleted or pointed to the right place
rodolfo.miranda
and check if there are no other witnesses running in your system. kill them all
<@U035R1TFEET> Yes, I have encountered them before. Deleting the `keystore` won't solve the whole problem. You should kill the `kli witness demo` process(es)
Thank you all! I’m not sure why I wasn’t finding the stray process but i’ll look harder next time :slightly_smiling_face:
That happens whenever a HIO process attempts to bind to a port that is already in use. In this case you probably had witnesses already running somewhere.
andreialexandru98
I usually run this to kill the witness processes `lsof -P | grep "5632 (LISTEN" | awk '{print $2}' | xargs kill -9`
I just hit Ctrl-C, lol
nuttawut.kongsuwan
That would be fantastic!
Ugh sorry to miss the weekly meeting today. I’m no good without a calendar reminder…. i’ve added that now so should be good in the future. Is there a recording I can review?
as soon as I get an email saying it is available
Reading through the initialization logic for `kli init` alone is enough to indicate the scale of KERI and ACDC. `Baser` has 73 separate attributes, some separate LMDBs, some sub DBs. `Reger` has 32 separate attributes, again some their own LMDB instances, and some sub DBs. So, 105 separate datastores, I’m thinking somewhat similar to a table in a database. How many of those separate DBs or sub DBs are used or unused? It seems we have to assume all of those DBs are in use and required. I ask the question because when I did the comparison of the Agent REST APIs for KERIpy vs KERIA I found six unused endpoints that were only stubs. I’m sure there was an important reason for them to exist at some point though those six endpoints were left behind as everything else took shape. Is there a similar story for the database attributes of `Baser` and `Reger`?
A simple "Find Usages" with your IDE should let you know.
Yes, I will be going through that for each attribute. I figured I’d ask the question here before I go track all 105 database usages down to get a grip on what is being used.
May I ask that we refrain from the word ‘simple’ or variants like ‘simply’ and ‘straightforward’?
One way to diminish questions fired at the first group of developers is to document status (under construction, tested, active, inactive, deprecated, etc.) and also to always and everywhere “clean up after the dog”. What’s clearly not in use or an abandoned idea for the leading group is often a puzzle for those confronted with it. If we don’t closely track status, we will get questions, and rightfully so. Just my two cents. I don’t want to offend anybody (that is not productive) but to make a case for a better and faster learning environment to accommodate a growing community.
Sometimes people are faced with unrealistic expectations in the form of too few people, too much scope and too little time. When that happens, it is not always feasible to clean up anything, but instead just put your head down, work 80+ hours a week for months on end and deliver the impossible. And since we are critiquing replies here in Slack, I’d like to add that this post comes with the assumption that we who have bled over the development of KERI so far don’t know or understand that delivering perfectly documented and maintained free software to others is ideal. Like we don’t wish with all our hearts that we could cross every T and dot every I and document every single line of code. I do take offense to that.
Sorry this happened. There’s no assumption like this from my side. How can we change it for the good of both angles?
Today’s dev call made an important distinction between the User AID (on a person’s device) and a client/agent AID in the cloud. • The user’s AID is under the control of their KERI controller on their device. • The agent’s AID is under the control of the cloud host who is hosting the KERI controller for a given agent. Does the following diagram illustrate the distinction properly?
My notes on what Sam said seem to indicate this (paraphrased): > The keys to the Agent controller are completely separate from the keys to the user’s controller on their device. The Agent is acting on behalf of the user, though the AID for the agent controller, typically in the cloud, and the AID for the user’s controller, on their device, are different.
*Updated the picture to have less extra whitespace
A few questions with regard to the "First Seen Replay Couples". • Does anyone know why "First Seen Replay Couples" have both the `sn` and the date-time stamp? Looking at the KELs, there's always one entry in the couples' counter and the entry's `sn` matches that of the event it is attached to. So basically only the date-time stamp matters. Or does `sn` in the FSRC have additional significance? • Is it correct to assume that the counter, i.e. the ability to have multiple couples, is something for the future? • Finally, are FSRCs used for anything significant besides as metadata? I couldn't find anything in the KERIpy code. Perhaps I missed it?
I _believe_ in the case of a superseding rotation event the `fn` would be different from the `sn`. It is difficult to trace, but in `Kevery.processEvent` this code comment explains the use of first seen couples: > _firner is optional Seqner instance of cloned first seen ordinal_ > _If cloned mode then firner maybe provided (not None)_ > _When firner provided then compare fn of dater and database and_ > _first seen if not match then log and add cue notify problem_
Thanks a lot, <@U024CJMG22J>! I remember puzzling about that comment. I was just re-reading the whitepaper and realised that the "establishment event" class (e.g. rot, drt) can supersede other events (e.g. ixn) at the same location in order to enable recovery from compromised keys. Therefore, the `fn` is a version of the sequence number of the event as seen by a witness (including compromised events) and `sn` is the correct event number from the point of view of the controller. Got it!
No, this is not accurate. There is no key store on the Edge client with current implementations of the Signify client.
Ok, I misunderstood. So are the keys stored only on the cloud agent’s key store? And if so, are the keys reconstituted on the client every time something needs to be signed or is the key encrypted and then decrypted in the cloud server each time the key is used?
No unencrypted private key material ever exists in the cloud, that's the whole point.
For random key generation, the private key(s) and private next key(s) are encrypted using the salt entered by the user and stored on the agent.
For salty keys, each AID gets its own randomly generated salt that is encrypted and stored on the agent. The keys are then recreated by decrypting the salt on the client when needed
For KSM key generation all key material stays in the KSM
I've thought about allowing Signify Clients to choose to store the encrypted key material somewhere besides the agent. A mobile phone is a good use case for that. But current signify clients don't allow that.
rodolfo.miranda
We can easily develop an extern module to store keys in any storage not necessarily HSM
Yes, but that would require generating keys and signing in the extern module. I was thinking more of using the current key generation and signing and just storing the keys somewhere else.
rodolfo.miranda
Yes. I'm thinking of an extern module based on signify. So it will be an external module that references internally :stuck_out_tongue_closed_eyes:
So the salt is stored in the agent?
So is the only place that unencrypted keys exist the client during signing or rotation?
There is a new salt for every AID. That is stored, encrypted on the agent
What is stored on the client that is required to decrypt the keys stored on the server? Is it just the contents of the `bran` argument in each case? • for salty keys is it the `bran` argument passed to the `SaltyKeeper` ? • for randy keys I see the `bran` argument passed to `SignifyClient` that is then passed to the `Controller` and finally the `coring.Salter`.
And I imagine it is an implementation detail of the client, whether mobile phone or other client, to store and manage the `bran` values.
As I said, nothing is stored. The user-entered passcode is used
Only the passcode (bran) is required to decrypt the encrypted keys or salts. The passcode is provided by the user and it's not stored. The passcode is stretched to a seed, and then the seed is used to create an X25519 keypair. Private key material is encrypted with the X25519 public key and decrypted with the private key.
Thank you, this makes sense.
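A minimal sketch of the determinism property described above. This is NOT the real Signify code: Signify/KERIA stretch the passcode with libsodium's Argon2 and encrypt with an X25519 keypair; here stdlib `scrypt` stands in for the stretch, and the fixed salt is an illustration-only assumption. The point is only that the same passcode always yields the same seed, so nothing needs to be stored on the client.

```python
import hashlib

# Illustrative stand-in for the passcode-stretching step described above.
# The real scheme uses libsodium's Argon2 with tier-derived parameters;
# scrypt is used here only to show the derivation is deterministic.
def stretch_passcode(bran: bytes) -> bytes:
    """Stretch a passcode into a 32-byte seed (stdlib stand-in for Argon2)."""
    # Fixed salt keeps the sketch deterministic; purely an assumption here.
    return hashlib.scrypt(bran, salt=b"signify-sketch",
                          n=2**14, r=8, p=1, dklen=32)

bran = b"0123456789abcdefghijk"
assert stretch_passcode(bran) == stretch_passcode(bran)  # same passcode -> same seed
assert len(stretch_passcode(bran)) == 32                 # sized for a 32-byte private key
```

In the real flow that seed roots the X25519 keypair used to encrypt/decrypt the per-AID salts held on the agent, which is why only the passcode ever has to be remembered.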
So then, what is the difference between the user AID and the agent AID?
I’m trying to understand the distinction Sam made about these on the call yesterday.
At a base level, Sam wants to distinguish that what was being called the ‘controller’, at the edge, is not technically the cryptographic controller of the agent. The agent AID has its own keys and is a delegated AID from the user AID… so it’s technically incorrect to call the user the ‘controller’ of the agent. Then another important distinction that we made is that originally we called the edge AID the ‘client’, but that can invoke the idea of a particular device (web client, mobile client, etc.)… so we settled on ‘user’ to help the understanding that the user AID can be used across devices. The nuances of a user AID and delegated agent AID provide excellent features for us relative to other SSI edge/cloud stacks, so precision in the vocab is worthwhile (and I learn a ton from these nuanced discussions)!
Help me clarify two terms: “edge AID” and “delegated AID”. Does the term “edge AID” make sense if all of the key material is stored in the cloud agent? From what Phil and Arshdeep said earlier the passcode, which is stretched, is the only thing that comes from the user’s device. This is why I am confused. I see in the SignifyClient that the passcode is passed in, the `Controller` in the `SignifyClient` is created, which has a `coring.Signer` in it, yet later the SignifyClient is used to create AIDs. Are these created AIDs the “delegated AIDs”? And is the `coring.Signer` the edge AID?
What I am getting at with all of these questions is an understanding of what AID to issue credentials to. The options seemed to be a user AID and an agent AID based on yesterday’s call. Yet, I don’t see anywhere in SignifyPy where there is a concept of a “user AID.” I see a `coring.Signer` that is based on the passcode/bran the user passes in. This seems like the closest thing to a “user AID” to me though I would think we would only call something an AID if it has a KEL. I thought there were separate AIDs for the user and for an agent, though it seems there are only AIDs created within an agent (KERIA agent) on behalf of a user using a Signify client.
I’m sure there’s something I’m missing about these concepts of distinct user/edge and agent/delegated AIDs. When I see something like `aid = identifiers.create("aid1")` in the Signify code (from test_connect in test_clienting) then I am thinking that is an agent AID, not a user AID. Am I wrong there?
And can we really call the agent AID a delegated AID or is that term overloaded in the context of this discussion?
rodolfo.miranda
Agent AID and User AID (also known as Client or Controller in the code) are just AIDs used to communicate with each other in an authenticated way and to encrypt data. The useful AIDs are called Managed AIDs and are the ones created via the `/identifiers` interface. Witnesses, credentials, rotations, interactions: all are related to the managed AIDs. The events on those managed AIDs are created and signed on the User side, and then submitted to the Agent to make them available in a live agent that can talk to witnesses and other KERI agents. The Agent also acts as persistent cloud storage for the User, storing the encrypted salts, so she/he can recover all private key material when needed.
Phil initially avoided making the agent AID a delegated AID (for convenience, but eventually was going to) and last month he upgraded KERIA/SignifyPy to have the agent AID be a delegated AID. Rodo/Alex have implemented Signify-TS that way as well. So the agent AID is truly a KERI delegated AID
Thanks, this clarifies things.
Where is each AID stored? I would think the Agent AID is stored in the cloud storage for the Agent, though where is the User AID stored, meaning the private key material and key event log?
This is helpful though, thanks Rodolfo. So the events are created and signed at the User side and then submitted to the Agent. Does this mean wherever the Signify client is running? If so then I would think that this makes the user responsible for key storage whether an HSM, a mobile app (Signifide, SignifyTS+native plugin).
rodolfo.miranda
The User AID is created in a deterministic way just from the passcode provided by the user (1 key, 1 next key, transferable, no wits, Ed25519). That AID then has an interaction event only to approve the Agent AID as a delegatee, but this event can be recovered from the agent.
rodolfo.miranda
So the User only needs to remember the passcode
rodolfo.miranda
After entering the passcode, he can start a secure communication with the Agent
rodolfo.miranda
and, for example, ask the `/identifiers` method to send all his identifiers already created
rodolfo.miranda
and also ask for all encrypted salts used on these identifiers
rodolfo.miranda
the encryption is made by using his User AID, so with just the passcode he is able to decrypt all the salts and regenerate the private keys
rodolfo.miranda
Back to your questions: Signify runs in a browser or mobile client. It's actually an SDK; you just call methods on the SDK to perform actions. Its memory is volatile. After you kill your browser, you lose everything. That's the idea, because you can recover all info just with the passcode.
rodolfo.miranda
The Agent AID and all your Managed AIDs run on KERIA
rodolfo.miranda
KERIA = in the cloud with persistent storage and networking
Got it, this makes sense. Thank you so much Rodolfo.
Thank you <@U03P53FCYB1> and <@U035R1TFEET>. This all makes much more sense now. I will read through the SignifyTS code to verify that I understand everything you told me. I still have a question about the issuee in credential issuance. I will repeat what I believe you two have told me and then reiterate my question.
The User AID is created from a combination of the passcode the user enters and a salt stored on the Agent. The user passcode, or bran, is shown below.
bran = b'0123456789abcdefghijk'
client = SignifyClient(passcode=bran, tier=tier)
client.connect(url=url)
identifiers = client.identifiers()
aid = identifiers.create("aid1")
The above all occurs on the client. According to Rodolfo the User AID is used to send an inception event over the internet from the SignifyClient to the KERIA Agent during inception. This sets up a delegated AID (in SignifyTS) which is properly delegated from the User AID’s KEL to the Agent AID’s KEL through the use of seals.

The User AID can only be used by providing the passcode. It is only ever in volatile memory so as soon as the browser or mobile client is shut down then the User AID is removed.

What I’m still confused about are:
1. Whether credentials are issued to the User AID or Agent AIDs, which is similar to what you asked me yesterday.
2. Whether the User AID can be rotated as well as where the User KEL would be stored.
It appears the User KEL is never stored and is only created as needed on-the-fly to send an inception event one time at the beginning of setting up an agent.
I know we can’t answer the question authoritatively until code exists and no credential issuance code for KERIA exists yet. Maybe this is a question only Phil can answer, unless either of you have heard of what the plan is on credential issuance.
rodolfo.miranda
"The User AID is created from a combination of the passcode the user enters and a salt stored on the Agent". No, the User AID is just created with the passcode, in a deterministic way defined here:
rodolfo.miranda
"1. Whether credentials are issued to the User AID or Agent AIDs, which is similar to what you asked me yesterday." Credentials should be issued to Managed AIDs. The User and Agent AIDs are needed only for trusted communication between Signify and KERIA.
rodolfo.miranda
"2. Whether the User AID can be rotated as well as where the User KEL would be stored." The User AID does not rotate; it's only derived from the passcode. What a User can do is rotate the passcode and hence create a new User AID. Passcode rotation needs to re-encrypt all your salts. Some explanation of the process is here: . The User AID KEL consists of only two events: 1) the inception event, which can be regenerated just from the passcode; 2) the interaction event, which can be retrieved from the Agent and then validated.
This is very clear. Thank you. I’ll read up those linked documents to finish off my understanding.
Not sure the proper etiquette for this so I'm just going to drop it here: I would like to add did:keri/did:web to the agenda for tomorrow morning's KERI Dev call.
andreialexandru98
This is where the salt for the client aid gets generated
andreialexandru98
Can we also discuss the data flows between the keria agent and a general backend service as well as talking about how we want to integrate the mailbox concept in keria? My understanding is a little fuzzy on that!
If <@U055XBX2EAD> can make it, he could give some interesting insight
Drummond Reed has reached out to Markus
Amazing I wanted to bring this up
Hi I'm at the DICE conference in Zurich but will try to join today
Hi I can join in 10-15 min, sorry still at the conference..
Oh it's at 7am PT, right?
Yes, I added you to my meeting invite
I missed the call today. Out of town. I am looking forward to seeing the recording.
You are missing a really good one!!
Great session indeed!
Ahh, sad to miss. I was at a conference today. I will watch the recording. Has it been posted yet?
Yep, I see it. Thanks <@U024KC347B4>
Hey everyone. I'll miss the KERI and KERI-dev meetings this week due to an onsite with my company, but I look forward to sharing some things we've learned using ACDCs in complex scenarios at next week's ACDC meeting!
I can also talk about our experience starting with orchestrated direct mode and no witnesses and how we plan to evolve that into a fully functioning system, but maybe that's a better topic for a meeting like today's or one of the dev calls.
rodolfo.miranda
love to hear it
Gonna miss today as well. I’ll catch the recording.
me too
Can we get the Zoom recording link for the 6/15 meeting?
Wow, the 6/8 meeting on did:web + did:keri really is worth watching! You all re-imagined DNS name transfers with KELs and anchors! This is really cool.
And a quick explanation of how to do NFTs with KERI was useful as well.
It was so satisfying to hear Stephen Curran describe the possibility of the important DID methods being reduced down to: - did:web - did:keri - did:key - did:peer And the other methods just being unnecessary. This is big.
Right at 54:16
michal.pietrus
<@U03EUG009MY> where did you find the recording link?
Haven’t got it yet.
michal.pietrus
apologize, I meant the 6/8 one
Here:
If KERIA is telling me `ERR: unable to query witness BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha, no http endpoint` does this mean I need an OOBI resolved for `BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha` (wan)?
I solved my problem. I was starting KERIA up with the wrong config file. I needed some OOBIs in my config file to bootstrap my agent config.
rodolfo.miranda
yes, you need to resolve OOBI first or have it in the keria config file
Yes, you can either preconfigure to load the OOBIs at start up or OOBI them using one of the client APIs (you know, because there are now multiple clients (TypeScript and Python) to interact with KERIA!!!)
What does the MDB_BAD_RSLOT error mean? I got it today when trying to connect to KERIA with a SignifyPy client:
lmdb.BadRslotError: mdb_txn_renew: MDB_BAD_RSLOT: Invalid reuse of reader locktable slot
Should this be an issue on the KERIA repository? The way I eliminate this error is by clearing out my entire .keri directory and starting fresh.
Google the error, that’s what I would do
Good call, I was thinking it was a KERI or HIO issue, though clearly it is lmdb. It is a locking issue that can occur when using multiple processes in Python, since the locks are not managed by the `lmdb` package nor by Python. It seems it likely occurs when the same file is being opened twice from the same process. It looks like our options are: 1. (Preferred) to track down that dual, or more, file access, or 2. (likely kicks the can down the road) to set `lock=False` on the `lmdb.open(…)` call (from the Stackoverflow post):
lmdb.open(db_dir, create=False, subdir=True, readonly=True, lock=False)
The current LMDB open call from `keri.db.dbing.LMDBer.reopen` is:
self.env = lmdb.open(self.path, max_dbs=self.MaxNamedDBs, map_size=104857600,
                     mode=self.perm, readonly=self.readonly)
It doesn’t seem like a big issue, just an annoyance.
You probably had another process running. This does not seem like an issue with KERIA
Yeah, that’s what I was thinking, a stray process on my dev machine. If so then this likely wouldn’t be an issue in production.
I’m getting an OOBI `not found` error while resolving an *agent* OOBI in the format shown by Is the correct format of the OOBI as follows? `http://{keria_host}:3902/{AID-of-person}/agent/{AID-of-agent}`
Yes, that works. For some reason it didn’t work the first few times, then I cleared things out, and then it worked!
My next problem is in receiving a credential. After successfully resolving the OOBI I get stuck on sending to the recipient.
kli vc issue --name {my_keystore} \
             --alias {my_alias} \
             --registry-name {my_registry} \
             --schema {cred_schema_1} \
             --recipient ${person_aid} \
             --data @/path/to/mycred-data.json
Writing credential EC2zQ2mpXjJpF8GAyBWe4b_B2LCLR41aNFXR_lu-oo_J to credential.json
Waiting for TEL event witness receipts
Sending TEL events to witnesses
Credential issuance complete, sending to recipient
It hangs on that last line of output there. Why would that be?

I imagine I have to have a registry created first in the KERIA agent in order to receive a credential. Is that correct?

The *issue-ecr.sh* script didn’t show a registry issuance, though I’ll give it a shot and report back.
Nope, that didn’t do it.
I'm starting to implement selective disclosure mechanisms. Specifically, I am wondering what the intent was surrounding the attribute aggregate - based on IPEX it seems like I need to issue and sign the most compact ACDC possible and then create variants from supporting data with more details. I only found comments referencing the aggregate in some `protocoling.py` for presentation requests. To be a bit clearer, if it's the case we expect multiple variants, what is the protocol for verification? Do we accept the compact acdc, verify, and then perform other verifications on the side for the attribute aggregate, or would we build it into the core verification logic? I think duplicity would be raised on seeing the same ACDC SAID twice, which would be a hurdle building into core logic. Also, how does one differentiate between an aggregate created as a Merkle tree and a digest created through simple concatenation? Simply by understanding the context from the schema? I have an appointment and will miss the `keri-dev` call but any insight is appreciated.
Or is there a flag to pass during ingestion that prevents the duplicity check?
I guess since the attribute aggregate is a list, and contains no SAID, you'd need to correlate against the compact ACDC to even verify; you can't verify the SAID of the aggregate from the uncompacted ACDC alone
nuttawut.kongsuwan
I had similar questions when I was reading the ACDC spec. I am really glad you asked these questions!
Actually just realized you could construct the compact ACDC and verify it matches
Still, there are questions of how to best store this data during ingestion etc
I guess what I am saying is that it would be easiest, rules permitting, to store the full, uncompacted acdc and compact portions on demand when pulling from the store. However during IPEX that doesn't make sense since you don't receive the uncompacted acdc until the end of the process. Does KERIpy include any code that deals with this?
My implementation actually bases most data access on something I call a SadStore, since SAIDs don't collide I just jam all data in there as a KV store and create references to it for things like KELs. So, I could just store the SADs of aggregates, attributes, and edges independently of the ACDC and weave it all together when pulling it from the store, that's probably the best solution since the data is broken apart and this permits destruction of the aggregate details if necessary, after disclosure, by the disclosee.
Rules section should be saidified too, I missed that one in my enumeration
michal.pietrus
solely thinking of storing them, a KV store allows you to store the whole graph of an n-depth ACDC. The most compact ACDC is just an object with SAIDs. Any next step reveals some/all attrs with the content. Then, any of the sub-attrs may again be a SAID, i.e. the `r` attr would contain `{ roleAliability: SAID, roleBliability: SAID }`. In the end, however, all the parts are SADs (self-addressing data) identified via their SAID and having some content. Therefore going with KV makes sense. `r`, for example, may be re-used in many ACDCs, allowing you to re-use what you've already got. Therefore in a KV, i.e.:
acdc:issued:its-SAID = { compact version of ACDC }
acdc:received:its-SAID = { compact version of ACDC }

// or just
acdc:issued = [ list of SAIDs ]
acdc:received = [ list of SAIDs ]
acdc:its-SAID = { compact version of ACDC }
...
Note I use namespacing, useful when scanning, i.e. in Redis (what type of KV db you'll use matters here). In `Sled` (Rust embedded KV) you'd need to model it differently. Then for each attr, i.e.:
r:its-SAID = { content of r }
r:its-SAID = { content of roleAliability }
and similarly for any other attr.

Above approach requires many hits to DB so a
acdc:parts:its-SAID = [all keys of all parts of n-depth ACDC]
would improve it.
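The SAID-keyed KV idea above can be sketched in a few lines of Python. The digest here is blake2b with plain base64url, a stand-in for a real CESR-encoded Blake3 SAID, and `SadStore`/`fake_said` are illustrative names; the point is only that SADs are content-addressed, so parts of an ACDC (attributes, edges, rules) can be stored once and referenced from many places.

```python
import base64
import hashlib
import json

def fake_said(sad: dict) -> str:
    """Content-address a SAD (illustrative digest, not a compliant SAID)."""
    raw = json.dumps(sad, sort_keys=True, separators=(",", ":")).encode()
    dig = hashlib.blake2b(raw, digest_size=32).digest()
    return base64.urlsafe_b64encode(dig).decode()

class SadStore:
    """Minimal SAID -> SAD key-value store."""
    def __init__(self):
        self._db = {}

    def put(self, sad: dict) -> str:
        said = fake_said(sad)
        self._db[said] = sad
        return said

    def get(self, said: str) -> dict:
        return self._db[said]

store = SadStore()
rules = {"usageDisclaimer": "..."}
said = store.put(rules)          # rules section stored once...
compact_acdc = {"r": said}       # ...and referenced by SAID from any ACDC
assert store.get(compact_acdc["r"]) == rules
```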
Is the SAID of all versions of ACDC computed on the compact variant? Perhaps that's why I'm confused
I was computing it on non-compact variants and wondering how I would perform my generic SAD validation when inserting something into my sad store, if the SAID of the compact ACDC differed from the value in "d". I think I solved my problem
Up until now I have only had unblinded, un-compacted ACDCs and have been computing the SAID on the un-compact data
Thanks for your response
Can I encapsulate anything I want in an exchange message, and just create appropriate routes for the context? I am thinking about using compact ACDCs as canonical representations, always indexing with their SAID, and storing all associated data (including the ACDC) in a SAD store. I'd like to put all that associated data in an `exn` message routed something like `/acdc/components`, and unblind any aggregate data in a `bar` message. Is this sensible?
And if it is sensible, how far off is KERIpy from supporting it?
I guess I'm wondering how to get selective disclosure implemented and not diverge from the reference implementation
(I'll be implementing in rust, but I use KERIpy to verify everything)
Yes, that is exactly the approach we are aiming for
Most compact version is the root of a tree of SADs
Excellent
Sam’s current work with CESR will allow Ilks on ACDC messages
Thus they can have types like schema or rules and be streamed independently
Awesome. Okay I'm going to rough out something now that I have confidence I can bring it in line with your approach when necessary, thanks Phil
> Is the SAID of all versions of ACDC computed on the compact variant? Perhaps that’s why I’m confused My understanding was that a SAID of an ACDC node was of the un-compacted node. And, since any chained nodes are linked via SAID then the SAID of any given node will include the SAIDs of any chained nodes. So the SAID of all versions of an ACDC are computed on the un-compacted variant of a given node and the SAID of any chained node. Does that make sense?
Hmm
SAIDs of chained nodes are in the edges section.
I hope I’m understanding your question here.
Yes I just don't like the mixing of compact and uncompact saids
The presence of SAIDs in the edges section is why I am thinking that the SAID of a credential is always of the uncompacted node plus any edge SAIDs.
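The check mentioned earlier in the thread (construct the compact ACDC from the uncompacted one and verify it matches) can be sketched like this. `digest` is an illustrative stand-in for a real SAID computation, and the expandable section names are assumed from the ACDC `a`/`e`/`r` convention:

```python
import hashlib
import json

def digest(sad: dict) -> str:
    """Illustrative stand-in for a SAID (not CESR-compliant)."""
    raw = json.dumps(sad, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.blake2b(raw, digest_size=32).hexdigest()

def compact(acdc: dict, sections=("a", "e", "r")) -> dict:
    """Collapse expandable sections of an uncompacted ACDC to their digests."""
    out = dict(acdc)
    for k in sections:
        if k in out and isinstance(out[k], dict):
            out[k] = digest(out[k])
    return out

uncompacted = {"v": "ACDC10JSON", "a": {"legalName": "Acme"}, "r": {"rule": "..."}}
signed_compact = compact(uncompacted)          # what the issuer would have signed
assert compact(uncompacted) == signed_compact  # disclosed variant matches
```

The compaction is deterministic, so a verifier given the uncompacted variant can always recompute the compact root and check it against what was actually anchored and signed.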
Is there a way to prevent a blinded split data attack? I don't know what else to call it but consider this: As a malicious participant, I can create a blinded aggregate array in an ACDC that conforms to a schema such as
    "A": {
      "oneOf": [
        {
          "description": "Attribute aggregate digest",
          "type": "string"
        },
        {
          "$id": "",
          "description": "Attribute aggregate array",
          "type": "array",
          "minItems": 1,
          "maxItems": 3,
          "items": {
            "anyOf": [
              {
                "type": "object",
                "required": ["d", "u", "i"],
                "properties": {
                  "d": {
                    "description": "SAID of disclosable data",
                    "type": "string"
                  },
                  "u": {
                    "description": "Salty nonce",
                    "type": "string"
                  },
                  "i": {
                    "description": "Issuee AID",
                    "type": "string"    
                  }
                },
                "additionalProperties": false
              },
              {
                "type": "object",
                "required": ["d", "u", "claim"],
                "properties": {
                  "d": {
                    "description": "SAID of disclosable data",
                    "type": "string"
                  },
                  "u": {
                    "description": "Salty nonce",
                    "type": "string"
                  },
                  "legalName": {
                    "description": "Legal name",
                    "type": "string"    
                  }
                },
                "additionalProperties": false
              },
              {
                "type": "object",
                "required": ["d", "u", "age"],
                "properties": {
                  "d": {
                    "description": "SAID of disclosable data",
                    "type": "string"
                  },
                  "u": {
                    "description": "Salty nonce",
                    "type": "string"
                  },
                  "age": {
                    "description": "Age",
                    "type": "number"
                  }
                },
                "additionalProperties": false
              }
            ]
          }
        }
      ]
    }
and then I can create a blinded aggregate that contains three entries, but with age omitted and two different legal names. If someone asks me to disclose my legal name, I can choose which to display. This seems very bad.

To alleviate this, when presenting the blinded array, could we not label each SAID with the data label used in the SAD it corresponds to? Maybe this is the intent. I think this also allows for saidification of the attributes themselves, so that we can use a SAID and not just a digest in the aggregate value when compacted (again, maybe that was the intent). I'll clarify what I mean in an example in a reply.

Additionally, without the labels, I can't really think of how to prevent this - so does that mean the merkle tree method of disclosing breaks down?
I guess what I'm saying is that I interpreted the data to be presented as either (assuming everything is computed correctly):
"A": "EGsHbUlJ1JSm63A2dzBmrWvLcKVb22_6OD0fL61KuZ3V"
or

"A": [
  "EJgDHAe0lS3dWPB7yT78O2d1xb_AuNecU8VMjykVTd4F",
  "EGcJzlAaalMxlFSfs2DPB7Tx7n3D7EKAWSXJaheVf_-P",
  "EOYM0KDlDPODdiUDL7Xp-XRjr9mif7Dv5ovMSGTRxyTA"
]
or

"A": [
  {
    "d": "EJgDHAe0lS3dWPB7yT78O2d1xb_AuNecU8VMjykVTd4F",
    "u": "0AB9VADfPtCQvFqp-u4BxUvy",
    "i": "ENoxXSSTfy8FDryU0J0av3IdHKqAb6aYBu0fIT5fvqfY"
  },
  {
    "d": "EGcJzlAaalMxlFSfs2DPB7Tx7n3D7EKAWSXJaheVf_-P",
    "u": "0ACsCxwKKCg0C9Hb7OX9ajbZ",
    "legalName": "Jason Colburne"
  },
  {
    "d": "EOYM0KDlDPODdiUDL7Xp-XRjr9mif7Dv5ovMSGTRxyTA",
    "u": "0ACbHEmnqeXUJFMf1G2Dj0BU",
    "age": 43
  }
]
and I am proposing the first two be disclosed like this:

"A": "EAVyu2rAKhx-kV0sVOKMDE4IFHQjmMyllmZfInBygwb0"
"A": {
  "d": "EAVyu2rAKhx-kV0sVOKMDE4IFHQjmMyllmZfInBygwb0",
  "i": "EJgDHAe0lS3dWPB7yT78O2d1xb_AuNecU8VMjykVTd4F",
  "legalName": "EGcJzlAaalMxlFSfs2DPB7Tx7n3D7EKAWSXJaheVf_-P",
  "age": "EOYM0KDlDPODdiUDL7Xp-XRjr9mif7Dv5ovMSGTRxyTA"
}
Does this make sense?
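A sketch of the labeled form proposed above, in Python. This is illustrative only, not the ACDC specification's algorithm: the attribute shapes reuse the examples above, SHA3-256 stands in for Blake3-256, and compact JSON stands in for canonical serialization. The point is that keying each blinded SAID by its attribute label makes two entries for the same label impossible.

```python
import hashlib
import json
from base64 import urlsafe_b64encode

def digest(raw: bytes) -> str:
    # CESR-style 256-bit digest; SHA3-256 ('H') stands in for KERI's
    # default Blake3-256 ('E'), which is not in the standard library.
    return "H" + urlsafe_b64encode(b"\x00" + hashlib.sha3_256(raw).digest()).decode()[1:]

def blind(attr: dict, nonce: str) -> dict:
    """Return a blinded attribute SAD: salty nonce 'u' plus SAID 'd'."""
    dummy = dict(attr, d="#" * 44, u=nonce)
    return dict(attr, d=digest(json.dumps(dummy, separators=(",", ":")).encode()), u=nonce)

def label_aggregate(sads: list) -> tuple:
    """Build the labeled compact form: each SAID keyed by the single
    attribute label it blinds, so duplicate labels cannot coexist."""
    labeled = {}
    for sad in sads:
        (label,) = [k for k in sad if k not in ("d", "u")]
        if label in labeled:
            raise ValueError(f"duplicate blinded label: {label}")
        labeled[label] = sad["d"]
    # The aggregate SAID commits to the labeled map as a whole.
    return labeled, digest(json.dumps(labeled, separators=(",", ":")).encode())

# Attributes and nonces shaped like the examples above.
sads = [
    blind({"i": "ENoxXSSTfy8FDryU0J0av3IdHKqAb6aYBu0fIT5fvqfY"}, "0AB9VADfPtCQvFqp-u4BxUvy"),
    blind({"legalName": "Jason Colburne"}, "0ACsCxwKKCg0C9Hb7OX9ajbZ"),
    blind({"age": 43}, "0ACbHEmnqeXUJFMf1G2Dj0BU"),
]
labeled, aggregate = label_aggregate(sads)
print(aggregate)
print(labeled)
```

A split-data attack with two `legalName` entries now fails at construction time rather than silently producing a credential the holder can disclose either way.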
nuttawut.kongsuwan
I believe it is Sam’s intent to design `selective disclosure` in ACDCs this way. I think what you propose is basically what Sam called `partial disclosure`, and the small “a” already supports it. This paragraph is from the ACDC specification:
The primary difference between partial disclosure and selective disclosure is determined by the correlatability with respect to its encompassing block after its disclosure. A partially disclosable field becomes correlatable to its encompassing block after its disclosure whereas a selectively disclosable field does not. After selective disclosure, the selectively disclosed fields are not correlatable to the so-far undisclosed but selectively disclosable fields in the same encompassing block.
nuttawut.kongsuwan
I don’t have a good answer to your “blinded split data attack”. However, it seems to me that it is a problem with an issuer issuing bad/problematic credentials from the beginning, not a problem on the credential holder’s side. Most credential issuers would have an incentive not to issue such problematic credentials. They may suffer a loss of reputation once validators find out about this “blinded split data attack”. The validators may even choose to reject all credentials from the bad issuers. So a pragmatic solution to your problem is to blacklist all the bad issuers. A credential holder may indeed have the incentive to present such a credential maliciously. However, the credential holder cannot do so if their issuer does not allow such a credential to exist from the beginning.
How does the ecosystem prevent issuance of such a credential?
Anyone with capacity to issue can create one, I believe
‘Most issuers wouldn’t’ doesn’t seem like a strong enough answer
Let me clarify - in my world nothing prevents an issuer from holding the credential they issue (claims are not the same as participant auth acdcs and issuer trust should still be considered by verifiers)
So a malicious actor isn’t necessarily giving out credentials they may just be asserting claims
And while our application would not permit constructing such an ACDC I still feel like just adding those labels fixes things so that these bad ACDCs just can’t exist; but maybe I missed something from the spec. I will re-read partial disclosure
Actually do the labels even fix this?
Even if you have the traditional setup, administrative exploit will someday make this a reality I believe
I was trying to use a schema to prevent it
Maybe I didn’t try hard enough
I guess the labels do solve most of it
nuttawut.kongsuwan
This is probably not the answer you are looking for. I think the problematic part in your schema is the “anyOf” keyword. An ecosystem may agree on schemas that do not allow the “anyOf” keyword to avoid that problem. A good example is GLEIF, where they define vLEI credential schemas and an ecosystem governance framework. GLEIF also decides if someone is fit to be an issuer and can kick the issuer out of the ecosystem if they misbehave.
Ah, yes I thought I got that from the spec actually
Yeah, it's in the example in the _selective_ disclosure ACDC IETF draft
oh maybe `uniqueItems`
I didn't notice that the first time
Nah, `uniqueItems` doesn't work
If the values differ
If it only cared about keys for objects it would be fine.. ie, `uniqueItemKeys` or something, but that's probably never going to happen
This would be much better served as a GitHub issue. Sam sees and participates in those, not in Slack. Plus it's much easier to archive and track in the future.
Makes a lot of sense, I'll clean up my thoughts and post there later this weekend if necessary after reviewing the ACDC spec one more time
Probably an issue in the ACDC spec repo. Because this sounds like a discussion that might change the spec
Done:
Anyone know why we are verifying a SAID in a reply message immediately after creating it? The other event messages don't seem to do this
Also why doesn't a `bar` message have a timestamp?
That is not creating the SAID, it is loading it from the 'd' field.
`bar` messages are purely experimental at this point. There are no actual uses of them that I am aware of.
Here's the line two lines prior, Phil:
Thanks for the response
I'm not going to be able to attend the Dev call this morning. I implemented credential issuance and presentation in KERIA last week and right now am working on searching, ordering and paginating in the credential list API.
apparently my mic has died?
I was talking to you guys lol
hah
I have one question in regard to what Sam said in the recording: If I use the same salt, the keys that are generated should always be the same and if they are always the same, the incepted identifiers should also always be the same (assuming the configuration, e.g. witnesses is the same). Then, how is it possible to use the same salt to generate multiple different identifiers?
We use a form of hierarchical deterministic key chain by modifying the “path” we generate, which is an additional argument to the libsodium key creation algorithm. We have 2 counters we use when generating the path: one is based on the identifier’s index in a given wallet and the other is based on the key’s index in a given identifier.
This way we can recreate all keys in the chain from the salt at any given time.
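A minimal sketch of the idea in Python. This is not the actual libsodium/keripy derivation (which stretches the salt with Argon2 and feeds a text path into the KDF); keyed BLAKE2b and the path format here are stand-ins to show how two counters plus one salt reproduce every key.

```python
import hashlib

def derive_seed(salt: bytes, identifier_index: int, key_index: int) -> bytes:
    """Deterministically derive a 32-byte signing seed from one salt.

    Illustration only: keyed BLAKE2b and this path format stand in for
    keripy's Argon2-stretched, libsodium-based derivation.
    """
    # Two counters form the path: wallet-level identifier index and
    # identifier-level key index.
    path = f"{identifier_index:x}/{key_index:x}".encode()
    return hashlib.blake2b(path, key=salt, digest_size=32).digest()

salt = b"0123456789abcdef"  # hypothetical 128-bit salt

# Same salt, different path counters -> different, reproducible seeds.
seed_a = derive_seed(salt, identifier_index=0, key_index=0)
seed_b = derive_seed(salt, identifier_index=1, key_index=0)
assert seed_a != seed_b
assert seed_a == derive_seed(salt, 0, 0)  # recreatable at any time
```

Because every seed is a pure function of (salt, path), the whole key chain can be regenerated from the salt alone, which is what makes salt rotation and recovery possible.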
nuttawut.kongsuwan
I would like to clarify my understanding of , and I would appreciate it if anyone can correct or confirm my understanding. “Salty Keys” and “Randy Keys” sections mention that they use `X25519 encryption key generated from the passcode`. Since `X25519` is a key exchange algorithm, my initial understanding was that Signify and KERIA agents exchange their key with an ECDH protocol. However, this sentence indicates that this is not the case:
For all key generation methods, the Signify Client creates and signs all KERI events, credentials, etc. ensuring that unencrypted private key material never leaves the client.
So I dug a bit deeper and found this blog , which is mentioned in keripy/ref/CypherSuites.md. My understanding from reading this blog is that, in SKRAP, X25519 is not used for key exchange but for plain encryption/decryption (with AES?) at the Signify Client. The passcode generated at Signify is then used as an `ephemeral_secret` to generate an `ephemeral_share` and `shared_secret`. Although they are called shared secrets due to the terminology used in Diffie–Hellman key exchange, they are solely kept in the Signify Client and *never* shared with KERIA.

Is this correct?
Screenshot 2566-07-11 at 09.16.49.png
rodolfo.miranda
Yes, that is correct. The User AID in Signify has one ed25519 key derived from the passcode salt. The corresponding X25519 key is used to encrypt any other private keys generated at client side before sending to KERIA for persisted storage. So KERIA has no way to see your private keys decrypted.
nuttawut.kongsuwan
Thanks!
Just for my understanding as well. With this process there is no way to change the passcode salt, right?
nuttawut.kongsuwan
Phil also wrote a section on passcode rotation. Is it what you are looking for, or did you mean something else?
Yes, thank you. This is what I was looking for.
nuttawut.kongsuwan
Not sure if this also answers your question in the previous thread where you mentioned leaking salt.
I think it is. If the salt gets leaked (basically the password) it can be rotated according to this doc you shared.
rodolfo.miranda
when you rotate your passcode, you need to decrypt all private keys with the old passcode, encrypt with the new one, and send back to KERIA
Yeah, I see. I wonder what the process would be if there were no salt at all - just one device-generated keypair. I'm currently searching for a way to derive multiple keypairs from one initial keypair.
nuttawut.kongsuwan
Isn’t that what hierarchical deterministic key does?
rodolfo.miranda
The so-called "Salty" keys do that. Use just a single salt, and then derive all the others deterministically
Okay, thank you!
nuttawut.kongsuwan
<@U03P53FCYB1> Do you know what Randy and Sandy stand for? My chatGPT doesn’t know what they stand for :stuck_out_tongue:
rodolfo.miranda
We should ask Phil or Sam what's the story behind the names :grinning:. I think they are just fancy names, Randy for fully random, and Sandy for HD keys from a Salt
To be more precise, I think this section is what I am looking for. However it seems to be a WIP.
nuttawut.kongsuwan
So this is the one.
Screenshot 2566-07-13 at 21.59.16.png
I am a bit confused. There seems to be a cesr repository, but it seems like the exact same code is also copied into the keride repository. Which repo will be maintained in the future?
cesride
I have new acdc/keri example code. I was going to put it in `cesride` and I realized it depended on `parside`, making the dependency graph circular. Unsure exactly how to proceed - I could put the code in `parside` for now I suppose, but I feel like it is easier to discover in `cesride`.
I made a new repo in my personal account for my KERI/ACDC `Rust` code. Here's the initial PR: The code can do a fair bit, and the example really only exercises nested partial disclosure and most compact ACDCs. The other big, not strictly necessary, feature that is supported is partial rotation. cc <@U05DT8YPEG1> for an example of a basic (unpersisted) wallet. For anyone interested, you should be able to check out the branch and run this command (with a Rust toolchain installed):
cargo run --example acdc-e2e --release
Still transferring events and acdcs directly from data store to data store, but it should be clear that the presenter could sign another encapsulating message, IPEX could be implemented on top of that, etc.
And currently the data store interface wouldn't support, say, recovery. One would need to be able to support two versions of a sequence number I believe, and the store is so simple in the example that these structures are just lists.
Also, there is a lot of text conversion in the code - I'm going to work to eliminate that and use the primitives as much as possible. The code kind of integrated like this during development.
rodolfo.miranda
Just checking the code briefly, it seems that you have implemented many keri agent functionalities. Are you handling witness receipts and msgs?
andreialexandru98
<@U056E1W01K4> Nice stuff!! :eyes: How did you go about multisig message exchange? I see the quadlet abstraction but it doesn't seem clear to me how that is handled :thinking_face:
Are you alright with us scraping your repo for KERISSE?
I'd hold off until I actually merge that PR, there is a bit more work to be done. I'll keep you posted
I didn't implement multi-sig yet, but we want it (if for no other reason than to aggregate multiple identifiers we bind for different purposes to the same user/org using a sith of 1 - unless I am misunderstanding what you mean by multi-sig). Witnessing is in its infancy, though it's pretty simple to finish off. Our production code runs in a multi-tenant way, where you can keep vaults resident in memory if that's desirable (high frequency issuance) - encrypted in Redis, actually, so that a typical server can just lock the correct Redis DB. I plan to leverage this to provide a high performance witness pool. I think right now the verification code won't accept a non-transferable identifier, so that's the first thing to add. Aside from that, the receipt message creation method is in place, so it would then be a matter of parsing that message.
One other thing I need to ask Sam about: I stopped signing ACDCs, since the signature on a Key Event that references the ACDC and Transaction event should be enough. The thing I am not sure about is that I didn't add another seal. I figured the existing one that had the ACDC SAID in the `i` field was sufficient for the check.
(I believe the `i` field is the ACDC SAID and the `d` field is the VC Transaction Event SAID)
:+1:
I thought that the `d` field is the digest of the event so it can be referenced as `"a": [<d>]` in the kel instead of `"a": [{i: <said>, s: 0, d: <said>}]` . To reference the VC transaction event, one can use the sequence number of the tx event in the log
But yeah, according to this documentation it could be both, the digest of the seal event and the digest of the tel event.
> When the ACDC is registered using an issuance/revocation TEL (Transaction Event Log) then the issuance proof seal digest is the SAID of the issuance (inception) event in the ACDC's TEL entry. The issuance event in the TEL includes the SAID of the ACDC. This binds the ACDC to the issuance proof seal in the Issuer's KEL through the TEL entry.When the ACDC is not registered using an issuance/revocation TEL then the issuance proof seal digest is the SAID of the ACDC itself.
If the above is true I would like to add a question. What is the sequence number for? Because the sequence number of the TEL is already referenced by the `d` field, which points to the exact TEL event that includes the sequence number.
(1) If the registry is of the `NB` type, does it have the sole purpose of verifying that there are no backers other than the issuer?
(2) I have another interesting question ... Let's say someone wants to issue 500 ACDCs over the next 12 months. What would be the best approach to do this? I know I could anchor each ACDC on its own, but then the KEL would have 500 anchors by the end of the 12 months (this does not sound like a good idea). Another option would be to use bulk issuance of the ACDCs; however, in that case I would need to know the details of the 500 ACDCs upfront. Is there a way to efficiently issue a large number of ACDCs over time?
That's a good question. My plan was to scale out issuers in a hierarchy using delegation of authority through ACDCs and chaining to mitigate this - but yes, I expect the KELs to grow large as ACDCs are issued.
rodolfo.miranda
the anchor in the KEL event has `i`, `s` and `d` the same as the `i`, `s` and `d` of the TEL event. On the TEL event, the `i` is the SAID of the VC, `s`=0 for issued and 1 for revoked, and `d` is the SAID of this TEL event.
^ exactly, so my question is - by protocol, am I supposed to be attaching another seal specifically for the ACDC independent of the TEL seal, or can I use the TEL seal to transitively verify the ACDC SAID?
for the s = 0 case
I expect we can be efficient and just use the fact that the TEL is valid
rodolfo.miranda
I think we are not attaching any extra seal. Just the TEL is enough
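A sketch of that transitive check, with hypothetical SAIDs. This assumes, per the field layout described above, that the KEL anchor seal repeats the TEL issuance event's `i`, `s`, and `d`, and that the TEL event's `i` is the ACDC's SAID.

```python
def verify_issuance(seal: dict, tel_event: dict, acdc_said: str) -> bool:
    """Check an ACDC transitively through its TEL issuance seal.

    Sketch of the check discussed above: the KEL anchor seal's
    i/s/d must match the TEL issuance event, and the TEL event's
    'i' must be the ACDC's SAID, so no separate ACDC seal is needed.
    """
    return (
        seal["i"] == tel_event["i"]
        and seal["s"] == tel_event["s"] == "0"   # s = 0 means issued
        and seal["d"] == tel_event["d"]
        and tel_event["i"] == acdc_said
    )

# Hypothetical values for illustration only.
tel = {"i": "EAcdcSaid...", "s": "0", "d": "ETelEventSaid..."}
seal = {"i": "EAcdcSaid...", "s": "0", "d": "ETelEventSaid..."}
assert verify_issuance(seal, tel, "EAcdcSaid...")
```

If any link in the chain (seal, TEL event, ACDC SAID) is tampered with, one of the equalities fails, which is why the single TEL seal suffices.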
Do chained ACDCs also need to be anchored in a KEL?
Yes but those KELs would likely be shorter
Sorry, but why is that so? Let's say we do the following:
1. Create root ACDC
2. Create chained ACDC
Then (2) is connected to (1) via the `n` field, but both are normal ACDCs that would need to be anchored via a sealed event anchor.
Or do you just mean the KEL's would be shorter because now not only one KEL is used but multiple because of delegation?
Well you’d be delegating fewer issuers than you’d be likely to issue with those issuers
But also you could fan out the delegation issuers
I also considered issuing from short lived identifiers
Rotate to null after N issuances and spin up a new issuer. The only thing is - how can you revoke in that case?
If you don’t need revocation, maybe it is sufficient
Hm .. I see, thank you!
rodolfo.miranda
How much does a very long KEL affect performance? do you have an order of magnitude for "N" in mind?
rodolfo.miranda
Leo: (1) yes. (2) anchoring each event is not mandatory. You may anchor every N events of the TEL since every event references the previous SAID. I think we raised the issue in keri and dev meetings, but it's worth discussing in more detail again.
Wait but the TEL normally only has two events, right? One event for issuing and one for revoking.
I assume you are referring to chained ACDCs. So if we have a chain of `A<-B<-C<-D<-E<-F<-G` only let's say A, D and G need to be anchored.
Very long KELs affect transfer time during verification, which could be a problem for constrained or remote devices. Rodolfo, are you referring to management or VC TEL events? I think Leo is correct about VC TELs.
rodolfo.miranda
Yes, you are right. I'm thinking of the registry itself and Sam's idea of having something like a merkle tree of credentials
Yes, I see. However in this case the ACDCs would need to be signed.
Yes, otherwise a commitment to them wouldn't exist for a period of time when they should be valid
And then you'd be creating interaction events anyway, for the ACDC signatures, no?
I think this is an interesting question to ask in the dev call. How I imagined it, you would not need to anchor the signatures in the KEL. As long as you send the signatures with each ACDC in the tree to a verifier, you'd only need to anchor the root of the merkle tree with the `rd` field.
<@U056E1W01K4> I just listened to the last recording of the dev meeting and I heard that Phil also stopped signing ACDCs for the same reason you mentioned.
:+1:
I see what you mean. If you aren't creating them all (or presenting for the first time) at once, you won't be able to verify commitment until the merkle root is published, unless you can just assume to use the most recent key generation for verification until the merkle root is published - I suppose that would work
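To illustrate the bulk-commitment idea under discussion, here is a generic Merkle-root sketch (SHA3-256 as a stand-in digest, hypothetical leaf values; not the spec's construction). One anchored root commits to all 500 credential SAIDs, so the KEL gains one seal instead of 500.

```python
import hashlib

def merkle_root(leaves: list) -> bytes:
    """Fold a list of leaf byte strings into a single Merkle root."""
    level = [hashlib.sha3_256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha3_256(a + b).digest()
                 for a, b in zip(level[::2], level[1::2])]
    return level[0]

# 500 hypothetical ACDC SAIDs; one anchored root commits to them all.
saids = [f"EAcdc{i}".encode() for i in range(500)]
root = merkle_root(saids)
assert len(root) == 32
assert root == merkle_root(saids)        # deterministic
assert root != merkle_root(saids[:-1])   # any change alters the root
```

The trade-off raised above still holds: a credential created after the last published root has no anchored commitment until the next root is anchored.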
We put together an agenda for tomorrow. There is a lot on it, while not everyone was involved in the creation of issues we’re hoping by highlighting it in the call we can provide transparency into how we’ve been approaching development recently.
Hey everyone, I'm available to help with contributions. My initial thought was to jump on a Typescript implementation but I quickly got overwhelmed with all the moving parts. If there's low hanging fruit like documentation, tests, or succinct issues on the TS libs (signify etc) I can be of use and would be a good way for me to onboard. I'd also like to attend today's meeting but can't seem to find the time?
The meeting is in 20 minutes at 7:00am PDT
rodolfo.miranda
I couldn't get witnesses running behind `https`. Seems that `keripy` looks only for `http` urls for witnesses, see I also tried to modify keripy to accept `https` schemas but got other weird errors like `nodename nor servname provided, or not known`. <@U024CJMG22J> do you have any clue so I can try harder without going deep into `hio`, or should I just open an issue in keripy with a feature request?
The only way to fix these things is to go deep into hio. Is this a pressing issue for you?
The witnesses should terminate HTTPS themselves which will require changes to their command
rodolfo.miranda
The second error was my mistake; I was able to get it running. I'll see if a simple fix in keripy to cope with both http and https will work. I'm terminating https in an AWS API gateway, so witnesses don't need to change. Thanks
It would be more secure to terminate at the witness
rodolfo.miranda
It may also depend on the architecture. AWS provides some nice features if you catch https up front.
I guess, as long as you trust everything running in their data center between their load balancer and your VM.
I have to agree with Phil - decentralized identity is less about having your channels connected everywhere and more about who is in control of the channel itself - an AWS employee with account privileges is not your friend. I don't mind looking at `hio` <@U03P53FCYB1> if you want another set of eyes. Can you give me a starting point? Maybe some test command lines and a branch?
rodolfo.miranda
I found the solution, I'm writing a PR in `keripy`that I will share soon for your review. `hio` is fine
:+1:
rodolfo.miranda
PR submitted:
Do you have a command line I can test this branch with? A `kli` command? Maybe a better question is - I don't see new params for any `kli` commands - so is there a client you are using to exercise the server code you changed?
(for new functionality, i guess the old is probably able to be exercised with the stock kli?)
or maybe it's all parameterized
I have set up my keri witness as a docker service in a `docker-compose.yml`. It is reachable, but it cannot sign a receipt for the `inception` event. I'm getting a `raise kering.ConfigurationError(f"unable to find a valid endpoint for witness {pre}")` error. My guess is the `kli init` configuration file.
{
    "witness_0": {
        "dt": "2023-02-24T12:57:59.823350+00:00",
        "curls": [
            "",
            ""
        ]
    },
    "dt": "2023-02-24T12:57:59.823350+00:00",
    "iurls": []
}
What should replace the `127.0.0.1` to use it as a docker service?
Do you get any response if you try `curl -v http://<witness-ip>:<witness-port>/oobi/<witness-aid>/controller` ?
yes, the curl response works fine. During `kli init` the oobi can be resolved as well. But it seems the witness container cannot respond back to the `kli incept` event of the local AID, which is in a different container.
I had an issue with the docker network. I used `extra_hosts` in docker compose to solve the issue.
<@U04BRS1MUAH> Your `kli init` and `kli incept` are executed at the `host` level, is that right? And your witnesses are running as containers?
Hello. Can anyone elaborate on why cesride and keripy use `argon2` for key stretching instead of `hkdf`?
charles.lanahan
From the paper: > A good algorithm for key stretching is Argon2 [15; 114] which led to two links • I feel like there was also some justification elsewhere in some of the papers, but I can't remember where off the top of my head. I don't remember anything in particular about why argon2 would be better or worse than hkdf.
Thank you <@U05H2PS5U6Q>! I have an assumption. I believe argon2 is more suitable for stretching passwords (low entropy) and hkdf is more suitable if you already have key material like an ed25519 private key (high entropy).
charles.lanahan
Hmm maybe, (I'm no cryptographer, just an amateur who is interested). This answer seems reasonable based on my understanding though. Argon2 seems to "cost" a lot more than hkdf.
charles.lanahan
hmm then I read the other answer. Too deep for me actually. Maybe someone more knowledgeable will be able to answer. Certainly someone would be able to answer if you ask at the weekly keri meeting.
Ah ... actually this forum post clarified it a lot!
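A sketch of the distinction in Python, using the stdlib's `scrypt` as a stand-in for Argon2 (both are memory-hard password stretchers, unlike HKDF) alongside a plain RFC 5869 HKDF over the already-stretched key. The passcode and salt values are hypothetical.

```python
import hashlib
import hmac

# Memory-hard stretching for a low-entropy passcode: scrypt stands in
# for Argon2 here, since Argon2 is not in the Python standard library.
passcode = b"0ACDEyMzQ1Njc4OWxtbm9wcXI"  # hypothetical passcode
salt = b"0ABpc2Nhc2FsdA"                  # hypothetical salt
stretched = hashlib.scrypt(passcode, salt=salt, n=2**14, r=8, p=1,
                           maxmem=2**26, dklen=32)

# HKDF Extract-then-Expand (RFC 5869): cheap, but only appropriate when
# the input keying material is already high-entropy (e.g. a private key).
def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()   # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                             # expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

derived = hkdf_sha256(stretched, salt=b"", info=b"signing-seed")
assert len(stretched) == len(derived) == 32
```

The asymmetry is the point: brute-forcing a passcode through a memory-hard KDF is expensive per guess, while HKDF is deliberately fast and offers no such protection for low-entropy inputs.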
Hey. So I have another question regarding rotation events. Here is a valid example from the keri specs:
0 	Crnt 	[A0, A1, A2] 	            [1/2, 1/2, 1/2]              // Signing (current) keys of the inception event
0 	Next 	[H(A3), H(A4), H(A5)] 	    [1/2, 1/2, 1/2]              // Rotation (next) keys of the inception event
1 	Crnt 	[A3, A4, A5, A6, A7, A8] 	[0, 0, 0, 1/2, 1/2, 1/2]     // Signing (current) keys of the rotation event
1 	Next 	[H(A9), H(A10), H(A11)] 	[1/2, 1/2, 1/2]              // Rotation (next) keys of the rotation event
Now let me modify this example a bit by changing the thresholds of the rotation keys in the inception event and using just one of the previous next keys to rotate:

0 	Crnt 	[A0, A1, A2] 	            [1/2, 1/2, 1/2]              // Signing (current) keys of the inception event
0 	Next 	[H(A3), H(A4), H(A5)] 	    1                            // Rotation (next) keys of the inception event
1 	Crnt 	[A4, A6, A7, A8] 	        [0, 1/2, 1/2, 1/2]           // Signing (current) keys of the rotation event
1 	Next 	[H(A3), H(A10), H(A5)] 	    1                            // Rotation (next) keys of the rotation event
As you can see, I changed the rotation thresholds to *1* so *any* of the given rotation keys can sign a rotation.
As the rotation threshold is now 1, any of the rotation keys is able to sign the rotation, so I also only use one rotation keypair in the rotation's current key list (namely A4, instead of all three previous next keys A3, A4 and A5 as in the specs example). This allows me to generate only one new rotation keypair (namely A10) and keep the other rotation keys (namely A3 and A5) secret and unchanged.

*Question*:
(1) Is this valid? Or in other words, do I need to change all rotation keys in a rotation event or can I keep some of the old keys in case the threshold allows it?
(2) If this is valid, I assume it was unnecessary to expose all three rotation keys in the specs example, as two (1/2 + 1/2) rotation keys would already have fulfilled the threshold?
1. Yes this is valid. 2. Yes, only a next threshold satisfying number of keys (with signatures) need to be exposed in a rotation event.
Thank you a lot <@U024CJMG22J>!
<@U024CJMG22J> just to clarify. In the rotation event we always have two "types" of signatures attached. 1. The signatures of the current keys with signing rights to reach the current threshold 2. The signatures of the previously established next keys with rotation rights to reach the previous next threshold Is that correct?
Yup, you got it
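A sketch of the threshold arithmetic from the examples above. The `satisfies` helper is hypothetical, not keripy's actual threshold machinery, but the rule it encodes is the one discussed: an integer threshold counts signatures, while a weighted threshold requires the signers' fractional weights to sum to at least 1.

```python
from fractions import Fraction

def satisfies(threshold, signed_indices) -> bool:
    """Check whether a set of signing key indices meets a threshold.

    Hypothetical helper: an int threshold is M-of-N (count-based);
    a list of fraction strings is weight-based, satisfied when the
    signers' weights sum to >= 1.
    """
    if isinstance(threshold, int):
        return len(signed_indices) >= threshold
    total = sum((Fraction(threshold[i]) for i in signed_indices), Fraction(0))
    return total >= 1

# [1/2, 1/2, 1/2]: any two of the three keys suffice.
assert satisfies(["1/2", "1/2", "1/2"], {0, 2})
assert not satisfies(["1/2", "1/2", "1/2"], {1})

# Next threshold of 1, as in the modified example: any single prior
# next key (e.g. A4 alone) can authorize the rotation.
assert satisfies(1, {1})

# Zero-weighted current slots (like A4's 0 above) add no signing weight.
assert not satisfies(["0", "1/2", "1/2", "1/2"], {0})
assert satisfies(["0", "1/2", "1/2", "1/2"], {1, 2})
```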
Another question .. is the following correct?
0 	Crnt 	[A0, A1, A2] 	            [1/2, 1/2, 1/2]      
0 	Next 	[H(A3), H(A4), H(A5)] 	    1                      
1 	Crnt 	[A4, A6, A7, A8] 	        [0, 1/2, 1/2, 1/2]                            
Signing with `A4` means `index = 0` and `ondex = 1` .
Signing with `A6` means `index = 1` and `ondex = undefined` because this key is not specified in prior next list.
I believe you are correct, but I seem to recall ondex defaulting to the index value when left undefined, so it depends on whether you mean in the CESR encoded data or the method calls - the `Indexer` code will probably tell you the answer.
I just checked, it will eventually default to the index value. Here's the line:
Yes, that's true but checkout this example for signing with `A6` .
0 	Crnt 	[A0, A1, A2] 	            [1/2, 1/2, 1/2]      
0 	Next 	[H(A3), H(A4), H(A5)] 	    1                      
1 	Crnt 	[A3, A6, A7, A8] 	        [0, 1/2, 1/2, 1/2]     
Signing with `A3` means `index = 0` and `ondex = 0` . In other words, ondex does not need to be specified, it turns into index in the code.
Signing with `A6` means `index = 1` and `ondex = undefined`. But if undefined eventually turns into the value of index, meaning 1 then it makes no sense anymore.
Right but if the diger isn't present (in the prior next list) you just ignore the key
and the ondex
I know what you mean though
conceptually it's odd
Ah. Okay, I see.
hm ... I would have never thought about that.
Thank you!
No problem
I think turning ondex into index was initially meant as a helper, so for lists where the current and prior next keys are the same you would not need to add both index and ondex (only index, because they are the same). Maybe it makes sense to remove this helper and really have an ondex value of undefined in this case.
But then I guess a lot of the existing signify code would break.
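A sketch of the resolution rule as I understand it from this thread. The helper and its `_Big` suffix dispatch are simplifications, not the actual `Indexer` logic: dual-indexed ("big") codes mark rotation signatures, so their ondex defaults to the index when unset, while current-only codes ignore the ondex entirely.

```python
def resolve_ondex(code, index, ondex):
    """Resolve an indexed signature's prior-next index (ondex).

    Simplified sketch: codes ending in '_Big' stand in for the
    dual-indexed rotation-signature codes; anything else is treated
    as a current-only signature whose ondex is ignored.
    """
    if code.endswith("_Big"):
        # Rotation signature: key appears in both the current and the
        # prior next lists; an unset ondex falls back to the index.
        return index if ondex is None else ondex
    # Current-only signature: no prior-next slot to point at.
    return None

# A3 at slot 0 signs as both a current and a prior next key.
assert resolve_ondex("Ed25519_Big", index=0, ondex=0) == 0
# Unset ondex on a dual-indexed code falls back to the index.
assert resolve_ondex("Ed25519_Big", index=0, ondex=None) == 0
# A6 is not in the prior next list: current-only code, ondex ignored.
assert resolve_ondex("Ed25519", index=1, ondex=None) is None
```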
nuttawut.kongsuwan
I am studying CESR and am a bit confused about the date-time section. I would appreciate it if anyone can help. One real example I found is `-EAB0AAAAAAAAAAAAAAAAAAAAAAA1AAG2023-07-20T17c16c19d807575p00c00`, and I then tried to match it with the CESR specification. If I understand correctly, it has the following components: • `-EAB` # count of “fn+dt” couples -> which is equal to one in this case • `0AAAAAAAAAAAAAAAAAAAAAAA` # 128-bit number (24 characters) —> which is zero in this case • `1AAG2023-07-20T17c16c19d807575p00c00` # ISO-8601 DateTime (36 characters) where the date-time corresponds to `20 July 2023`, `17:16:19.807575` (hh : mm : ss . microseconds), and `p00c00` is `UTC+00:00`. My guess is that `c` is colon `:`, `d` is dot `.`, and `p` is plus `+`. Here are my questions: • Is the above correct? • What is the purpose of the 24 characters? • What is “fn” in `-EAB`? I could not find its description in the CESR specification.
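For what it's worth, the guessed substitutions decode cleanly with the stdlib. A sketch, assuming `c`/`d`/`p` map to `:`/`.`/`+` as above (and, I believe, the 24-character `0A` primitive is the event's first-seen ordinal number, "fn"; best confirmed on the spec call):

```python
from datetime import datetime, timezone

def decode_cesr_dt(qb64: str) -> datetime:
    """Decode a CESR date-time primitive (code '1AAG', 36 chars total).

    The 32-char body is ISO-8601 with Base64URL-safe substitutions:
    'c' for ':', 'd' for '.', and 'p' for '+'.
    """
    assert qb64.startswith("1AAG") and len(qb64) == 36
    iso = qb64[4:].translate(str.maketrans("cdp", ":.+"))
    return datetime.fromisoformat(iso)

dt = decode_cesr_dt("1AAG2023-07-20T17c16c19d807575p00c00")
assert dt == datetime(2023, 7, 20, 17, 16, 19, 807575, tzinfo=timezone.utc)
```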
Create question for the KERI spec calls :stuck_out_tongue:
or as an issue on the CESR repo (for the history)
nuttawut.kongsuwan
Sure! Shall I copy paste the question in Discord as well?
It is all a bit confusing isn't it, honestly the best place would be the tswg-acdc-tf in toip slack :slightly_smiling_face:
you don't have to, no
nuttawut.kongsuwan
<@U056E1W01K4> I think I can clarify this a bit now. We don't even need to check whether the diger is present, as you mentioned. The thing that matters is not the ondex itself but the code. If the code, for example, is `Ed25519_Big`, then it means that the given signature is a rotation signature. In the code it says the sig appears in both lists, current and previous next.
However, personally I think the naming convention here is a bit unfortunate. Because just from reading the name `Ed25519_Big` it is not clear that it is a rotation signature.
So if the code is a big signature we look at the ondex, otherwise we ignore it.
Ahhhh
Thanks for clarifying that
I actually think that’s probably how my code works
I wasn’t reading the whole thing :man-facepalming:
haha, yeah me too. i just did not think that the code matters