Slack Archive

<@U013SSREL0H> has joined the channel
juan.caballero
<@U01304JUQP9> has joined the channel
<@U01384U5KM4> has joined the channel
carsten.stoecker
<@U013E8JS33L> has joined the channel
robert.mitwicki
<@U013F46SDRR> has joined the channel
<@U024CJMG22J> has joined the channel
<@U024KC347B4> has joined the channel
Hi!
:wave:
juan.caballero
hello!
/github subscribe WebOfTrust/keripy
:white_check_mark: Subscribed to . This channel will receive notifications for `issues`, `pulls`, `commits`, `releases`, `deployments`
[WebOfTrust/keripy] Pull request merged by m00sey
/github unsubscribe WebOfTrust/keripy
Unsubscribed from
michal.pietrus
<@U02N7K951DW> has joined the channel
:wave:
michal.pietrus
:tada:
<@U02MLCPJ85A> has joined the channel
:wave:
Welcome <@U02MLCPJ85A> !
<@U02MD0HA7EJ> has joined the channel
<@U02N0K6LL93> has joined the channel
<@U02PA6UQ6BV> has joined the channel
thomasclinganjones
<@U02Q3A81HA5> has joined the channel
<@U035D255M0R> has joined the channel
<@U035JNUPF1V> has joined the channel
<@U035WESCM0V> has joined the channel
<@U035ZHBF21H> has joined the channel
<@U035R1TFEET> has joined the channel
<@U036FDVV3GV> has joined the channel
After changing our branches and PR strategy this week, we have our first PR to the `development` branch to get us a more stable `main` branch. PR was just merged into `development` with additional documentation, lots of clean up and work to make sure all scripts under `demo/scripts` work as advertised. As per our new procedures, this won't make it into `main` until we have our next Community meeting and agree to promote `development`. So if you are looking for stable demo scripts, you can now use `development` until then. Enjoy!
<@U024CJMG22J> I am looking into Sphinx / autoDocstring using VS Code. I'm trying to understand which warnings / errors actually wreck the end result.
New pull request being merged into the `development` branch this morning with updates to the credential issuance scripts in `scripts/demo/vLEI` to include the new QVI Auth Credentials. These credentials are issued from a Legal Entity back to their Qualified vLEI Issuer to authorize the issuance of role credentials. This change will require the latest version of the repo `dev` branch which contains the new schema and changes to the old schema to include the new auth credentials in the chains.
Have the scripts been changed to issue the credential from the LE to the QVI?
Yes
petteri.stenius
Hi all. While using Keep and going through the Root GAR inception process we ran into this lmdb MDB_MAP_FULL issue, causing "kli agent vlei" to crash. keripy is on the main branch.
image.png
~<@U03U37DM125> sometimes happens when two agents are trying to open the same key store~
The MAP_FULL exception is raised when the LMDB database exceeds its allocated size. By default that is only 10 megabytes, which is too small for most applications. That value can be changed in baser.py, but in general we need to raise the default and make the value a configuration parameter somehow.
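A minimal sketch of the "configuration parameter somehow" idea above. The helper and the environment variable name are hypothetical, not keripy's actual API; the value would be passed as the map size when the LMDB environment is opened.

```python
import os

# 10 MiB: the kind of small default described above (illustrative constant)
DEFAULT_MAP_SIZE = 10 * 1024 * 1024

def lmdb_map_size(env_var="KERI_LMDB_MAP_SIZE", default=DEFAULT_MAP_SIZE):
    """Hypothetical helper: read the LMDB map size from an environment
    variable, falling back to a default, so deployments that hit
    MDB_MAP_FULL can raise the limit without a code change."""
    raw = os.environ.get(env_var)
    return int(raw) if raw else default
```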
petteri.stenius
Yes, I can see that one of the databases for one of the keri agents has a size of 10 MB. This is a bit weird since I started from empty by first removing all databases. I was using Keep and was completing the create multi-sig Root GAR group step.
<@U03U37DM125> can you try replicating steps to reproduce and open an issue for us?
petteri.stenius
Sure. I'll create an issue on github with the information I have.
<@U02PA6UQ6BV> In today's KERI concepts meeting I asked: "In the keripy repo, in tests/vc/test_protocoling.py, in the test test_issuing, what is the meaning of 'sid', 'wan', 'red'?" Phil answered: ian: issuer of cred (starts with 'i'); sid: signer of cred (starts with 's'); wan: witness (starts with 'w'); red: recipient of cred (starts with 'r'). Perhaps that can go into a "keripy glossary"?
We currently have a central glossary, and I categorize entries into 9 categories based on a (fuzzy) count of words in plain text (or markdown). So yes, issuer, signer, witness etc. are all over the place. This seems to me an item for the Q&A and/or the HowTo? What do you think?
ian, sid, wan and red will create numerous false positives in a fuzzy search once they've been added to the glossary (3-letter words, lots of hits, multiple meanings)
Good point about the fuzzy search perspective. HowTo seems reasonable (unless adding those 3 letter words screws up search).
michal.pietrus
the keripy approach to naming variables is internal to keripy. Although the names appear in meetings from time to time, i.e. habery, habitat or those mentioned above, they are implementation details in essence. If something is not keripy-specific, it should be promoted into an official KERI concept; otherwise it should not be mentioned in any glossary (OK, maybe except a keripy-specific glossary, if one existed).
a keripy-specific glossary would be useful to help navigate the quirky naming schemes
Sure, I am creating this using GitHub Pages
Test environment only; have a look at the sidebar menu
These are all the nits of the glossary against the recent IETF CESR draft
hits
The HowTo I am currently creating is more focused on the WOT-terms themselves. But I do agree that a useful elaboration of Concepts.md (still an early draft) would be a practical 'HowTo' or 'HowDone' :slightly_smiling_face:
Can you tag glossary entries (acronyms) with "keripy"?
I could add a column in the WOT-terms-Manage sheet and then run the counting-terms match tool over the keripy repo. The challenge is: there is not much white paper text or other plain text to scan through.
Terms-WOT-manage.png
Repo text :smiley:
Yes, _repo text_ is generally a special kind of text: it looks reasonable and sounds like English (but it isn't), it leapfrogs through concepts, leaves out boundary conditions for what's been stated, likes to have at least 20% broken links, and so on. So yes, repo text deserves a characterization; you either love it or hate it :stuck_out_tongue_winking_eye:
rodolfo.miranda
Hello! I'm making my first steps with keripy. So far so good with the basic demo scripts :grinning:. I notice that the Swagger API exposes a server-sent events method. What type of notifications are sent to the controller? Is there documentation regarding the events? Also, I see a /notification method; is that for querying the same notifications?
Hi Rodolfo- We upgraded the SSE endpoint and the notifications recently to match more of a Firebase approach. Anything coming over the SSE channel is just a ping to a live client to refresh some other data set. The Notifications are a persistent dataset that are messages to the controller of an agent to take some action. They are persistent and need to be managed (marked as read, deleted) by the controller of the agent. So right now, the main use of SSE events is to let the controller of an agent know that there are new notifications and that a client should reload that dataset. Sadly we have not documented the SSE events yet.
rodolfo.miranda
thanks Phil, that's good enough. Another quick question: when starting the witness agents with `kli witness demo`, are those three witnesses deployed with controller ports?
Yes, they all have preconfigured ports. If you look in the demo-witness-OOBI.json files in the data directory under scripts you’ll see OOBIs for them with the port numbers
rodolfo.miranda
from:
usage: kli witness start [-h] [-V] [-H HTTP] [-T TCP] [-n NAME] [--base BASE] --alias ALIAS [--passcode BRAN]

Runs KERI witness controller. Example: witness -H 5631 -t 5632

options:
  -h, --help            show this help message and exit
  -V, --version         Prints out version of script runner.
  -H HTTP, --http HTTP  Local port number the HTTP server listens on. Default is 5631.
  -T TCP, --tcp TCP     Local port number the TCP server listens on. Default is 5632.
  -n NAME, --name NAME  Name of controller. Default is witness.
  --base BASE, -b BASE  additional optional prefix to file location of KERI keystore
  --alias ALIAS, -a ALIAS
                        human readable alias for the new identifier prefix
  --passcode BRAN, -p BRAN
                        22 character encryption passcode for keystore (is not saved)
rodolfo.miranda
is the HTTP a port for controller requests? does it have a swagger definition as in the `agent`?
Witnesses do not expose an API. Their HTTP port is simply for receiving CESR over HTTP requests for KEL / TEL events
(And also exn events for both `/fwd` and `/qry` CESR events)
rodolfo.miranda
ok, same for watcher I guess. What I'm looking to achieve is extending a watcher/witness to write the key events to the blockchain. What is the best way to get or retrieve those events from the watcher? It may be better to discuss that in one of the calls.
Watchers are not developed yet. After we deploy the vLEI in November, watchers are the next thing on our TODO list. If you wanted a witness to do that, I'd recommend looking at the `cues` that come out of the Kevery and Tevery objects. They are an event queue that signals the completion of an event.
rodolfo.miranda
Nice, I'll explore the `cues` thing first. Thanks
There is code in the repo for responding to TEL events off of the Tevery.cues queue. Specifically, it is reacting to receiving credential revocation events, but the exact same approach would be used for KEL events off of Kevery. It is the TeveryCuery class in serving.py
A related question: Has PTEL become obsolete? Sally seems to have taken its place. Or otherwise formulated: What problem does it solve? I tried really hard to think of one, but couldn't find one other than . Do we really need to specify PTEL, or is it a 'must-do' implementation and public use of a TEL to get the functionality we want: issuance and revocation?
rodolfo.miranda
I have a bunch of questions that will help me understand the architecture better and clarify some concepts. I'll throw them here, but let me know if you think it's better to handle questions in a different way so as not to overload this channel.
rodolfo.miranda
1. I understand that agents and witnesses communicate via CESR over the TCP port. Correct?
2. How can I retrieve the KEL from an agent? For example in the Alice-Bob demo, is there a way for Bob to retrieve Alice's KEL or current state?
3. As you mentioned, the witness also exposes an HTTP port for CESR-over-HTTP requests. What's the purpose of having two CESR channels? What type of requests can be submitted over HTTP? Can I query the KEL and KERL of the witnessed agent from there?
4. Is there a way to turn on more verbose logs on the witness to "see" what's going on? If I run `kli witness start` there seems to be no way to check whether the witness is operating properly.
No Henk, they are not really related. PTEL is a public transaction event log that can be used to securely track state anchored to a KEL. Sally is just an implementation of a verification service. It is purpose-built software for the vLEI ecosystem that allows participants in the vLEI ecosystem to present credentials so the GLEIF Reporting API can show which vLEIs are issued to Legal Entities.
Thx, pls end check and
rodolfo.miranda
5- I'm also having trouble to add a new witness to an existing agent. How can I generate the OOBI?
Hi Rodolfo- Answer incoming:
1. No, agents can communicate over TCP but all the configuration we use currently is via HTTP
2. Yes, there is a `kli query` command line command that can query a witness for a specific KEL.
3. We provided HTTP to make it easier to use KERI in environments where firewalls only allow HTTP traffic.
4. We have an outstanding issue to improve logging. Right now, all logging is based on the same "log level", and setting that log level to anything above CRITICAL becomes incredibly verbose because of the logging coming out of event parsing, processing and escrow processing. Our ultimate goal is to be able to configure logging on a per-module basis at different levels to allow for targeted logging to discover specific problems while eliminating a lot of noise. I would say my usual "PRs are welcome", but this is a big nut to crack and we need some time to figure out how to get it right.
5. Every witness exposes a blind OOBI at the path `/oobi` that can be used to introduce that witness into existing infrastructure.
rodolfo.miranda
Seems that the blind OOBI did the magic. Still need something else to add it as a witness
The OOBI is the mechanism that allows a controller to associate an endpoint with an AID. It says, "to contact this AID, use this endpoint (tcp or http)". Then any controller that has resolved the OOBI for a witness just needs to specify that witness by AID in its inception event.
So client configuration needs to provide the witness OOBIs in keripy configuration to allow the AID -> endpoint association and the client needs to allow the controller of an agent to select the witness by AID. In the Keep for the vLEI ecosystem we provide "pools" of witnesses (by AID) that are collections from the GLEIF provided witness network. The OOBIs are in the agent configuration and the user interface allows the controller to select them by "pool" which puts the AID in the `b` field of the inception event.
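The pool selection described above ultimately just populates the `b` field of the inception event. A hedged sketch of that shape (the AID strings, the helper name, and the toad policy are all hypothetical; real inception events are built and serialized by keripy, not by hand):

```python
def incept_with_pool(keys, pool):
    """Sketch: place a witness pool's AIDs into the `b` (backers) field of
    an inception-event-shaped dict. The `bt` (toad) choice of pool size
    minus one is an illustrative policy, not keripy's."""
    return {
        "t": "icp",
        "k": list(keys),                    # current signing keys
        "b": list(pool),                    # witness AIDs from the selected pool
        "bt": str(max(1, len(pool) - 1)),   # hypothetical toad policy
    }

pool = ["BWitAAAA...", "BWitBBBB...", "BWitCCCC..."]  # made-up witness AIDs
event = incept_with_pool(["DKeyAAAA..."], pool)
```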
Flow chart worthy. <@U024CJMG22J> If you draft one quickly on paper with a pencil, and send a photograph of the draft, I hereby offer to take it from there to create a fancy digital one.
We already have this one:
FYI. You could use to create diagrams using markdown, in HackMD or directly in GitHub.
rodolfo.miranda
I'm trying with no success to get an agent running with a single witness without using `kli demo`. The following are my steps:
1. init a db and keystore for the witness
2. incept the witness
3. start the witness
4. start an agent
5. boot the agent: create db and keystore, and unlock
6. resolve the blind witness OOBI `` —> log `success`
7. incept the agent with only the witness created before
8. agent crashes with `ERR: 'http'`
rodolfo.miranda
What am I missing?
rodolfo.miranda
I think there's something with the blind oobi since it doesn't have urls
{"v":"KERI10JSON0000fd_","t":"icp","d":"EGAOl2cFdAzUmVpMBfXPyN43sIg--ebodoHANM12oW1u","i":"BG5PdBRNDWWbqCF0VEMjX6K77Jy94A0aI7QECVUidnlJ","s":"0","kt":"1","k":["BG5PdBRNDWWbqCF0VEMjX6K77Jy94A0aI7QECVUidnlJ"],"nt":"0","n":[],"bt":"0","b":[],"c":[],"a":[]}-VAn-AABAADjn9mG0CTEJNXotZn0nq-mpV0Yw5923thvoaxmldtj5wzEneVaoSWpEZXMZFH7QkXIEZZvVI8CI_hZNK-8TE0O-EAB0AAAAAAAAAAAAAAAAAAAAAAA1AAG2022-10-12T00c52c22d370749p00c00
rodolfo.miranda
oobis from demo witnesses have much more details
{"v":"KERI10JSON0000fa_","t":"rpy","d":"EPLIonLx_5QWAL7LUanf8Jx2D7tyt799aNPw_gcktdtK","dt":"2022-10-13T11:13:43.367604+00:00","r":"/loc/scheme","a":{"eid":"BLskRTInXnMxWaGqcpSyMgo0nYbalW99cGZESrz3zapM","scheme":"http","url":""}}-VAi-CABBLskRTInXnMxWaGqcpSyMgo0nYbalW99cGZESrz3zapM0BDMmGW-cRRZCG4cAERC5hA9qguWk2FAdYyWSNrImMAD8JPNoLh4vmcntpy5fwfobl1RgZmQtCE1G4THfq1Kee8H{"v":"KERI10JSON0000f8_","t":"rpy","d":"EOKwPkc9V6OK1vEZ6k3m2nVTwGMypQkXkIriTV4YO36J","dt":"2022-10-13T11:13:43.369834+00:00","r":"/loc/scheme","a":{"eid":"BLskRTInXnMxWaGqcpSyMgo0nYbalW99cGZESrz3zapM","scheme":"tcp","url":""}}-VAi-CABBLskRTInXnMxWaGqcpSyMgo0nYbalW99cGZESrz3zapM0BDxhAOgPpf73lPRj8VEPkRLBiQoxSSfHdnwBWZ6UKDGq-Ov_j4s7FDhB4ty6NQ4mkNUSKvh_VUZB1vmhvmiHSMF{"v":"KERI10JSON0001b7_","t":"icp","d":"EHlNJJ1O-s2BlqngkY7mYbVamK0Z7ISkru6TkjLG7yUi","i":"EHlNJJ1O-s2BlqngkY7mYbVamK0Z7ISkru6TkjLG7yUi","s":"0","kt":"1","k":["DC2vlPbLRy-f7gIDWBiabmmtQoJRJbai9nk990OcWdcx"],"nt":"1","n":["EIZ44CF_RRCxHbd4zxdc5QIZQZXNWlwyU636bV9-rXIT"],"bt":"3","b":["BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha","BLskRTInXnMxWaGqcpSyMgo0nYbalW99cGZESrz3zapM","BIKKuvBwpmDVA4Ds-EpL5bt9OqPzWPja2LigFYZN2YfX"],"c":[],"a":[]}-VBq-AABAADu-Una0rLg3DA1d8qZ_LWcP6KEVCWMaAF4UKPGfT9b8ct2g9Q1hejXBfvHZQ90agEGBFr3yu-Bo5_EtIsY0icJ-BADAACbkEz96qyDWS5IlByP-gh_tiNrQf0iiGg9TU8QOrOC1h1hjVs2ar6fMmsCSXe5xStP2uFYi58EEHFtnFNiPFgLABALwJvk7b0tlBMG3WL-ZF1fF1oESsvNh04_9gsLZhxIFh_Opgt_mfKOH-1xeDn2pyHUM4rbiwFQSa7O4zfxRqcDACBbX6tGNnF0kWkLCPUDpVFTxlfnqDaN9THymUj875gHimFhGokaZm05zh4--vLwmBSdTGvP3UyrFfubFn3mBMwD-EAB0AAAAAAAAAAAAAAAAAAAAAAA1AAG2022-10-08T17c08c16d113607p00c00
The configuration for your witness needs to tell it to expose its ports in its OOBIs. Look at `wan.json` for example...
{
  "dt": "2022-01-20T12:57:59.823350+00:00",
  "wan": {
    "dt": "2022-01-20T12:57:59.823350+00:00",
    "curls": ["", ""]
  },
  "iurls": [
  ]
}
under the name of the alias, you add the `curls` section to tell it to expose those ports in OOBIs
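A sketch of a `wan.json`-shaped config with the `curls` filled in. The localhost URLs and ports here are hypothetical placeholders (the exported messages above had the real URLs stripped):

```python
import json

def witness_config(alias, http_port, tcp_port,
                   dt="2022-01-20T12:57:59.823350+00:00"):
    """Build a wan.json-shaped dict: the alias keys a section whose `curls`
    (controller URLs) tell the witness which endpoints to expose in its
    OOBIs. URLs are illustrative placeholders."""
    return {
        "dt": dt,
        alias: {
            "dt": dt,
            "curls": [f"http://127.0.0.1:{http_port}/",
                      f"tcp://127.0.0.1:{tcp_port}/"],
        },
        "iurls": [],
    }

cfg = witness_config("wan", 5642, 5632)
print(json.dumps(cfg, indent=2))
```

The file would be named after the value passed to `--name`, with the alias (`wan` here) matching the `--alias` parameter, as described above.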
rodolfo.miranda
Still struggling with the witness OOBI. I'm initializing and launching the witness with these commands: `kli init --name witroot --nopasscode --salt 0ACDEyMzQ1Njc4OWxtbm9aBc --config-dir ${MY_CONFIGS_DIR} --config-file wan` `kli witness start --name witroot --alias wan -H 5192 -T 5193` The OOBI that I get at `` does not have the urls. I tried many things with no success. Hints?
You have to line up the name of the config file with the value you pass to the “name” parameter and the JSON property name with the value of the “alias” parameter
rodolfo.miranda
didn't work either.
Hmm…
What about creating Docker Containers? Anything could go wrong / be different in individual stacks, no?
rodolfo.miranda
In this case I don't think it's an environment issue, since all tests and demos pass fine. It is probably due to me still learning how keripy works.
Agreed, maybe in this specific case it's not due to the stack, but imo there are a few advantages to using containers anyway:
• we'd be able to reproduce a situation (error / pass) provably based on the same stack, and not just because it passes all our tests
• we'd spare a lot of installation and configuration tweaks and lower the entry bar significantly
• testing in/from containers
Speaking of containers, here's a PR to fix the Docker build of KERIpy: there was an issue with the `blake3` dependency build needing a Rust installation and not seeing one, and the `ordered-set` dependency not playing nice with Python 3.9.7, so it upgrades the Python version as well as adds a Rust installation and invokes the Cargo environment prior to the `pip install` so that the install sees the Rust environment.
There are also some instructions added to the Readme on how to use the KLI from the built Docker container.
rodolfo.miranda
Can you validate if the following keripy architecture is accurate?
Captura de Pantalla 2022-10-17 a la(s) 10.24.40 a.m..png
Yes, that looks fairly accurate. One correction, the CESR streaming over TCP is implemented, we just don’t use it currently.
rodolfo.miranda
great. I saw the references in the code but wasn't sure if it was functional. Do all CESR messages go directly to the mailbox and then need to be retrieved and processed?
It all depends on what endpoints are exposed via OOBIs. For all of our current samples (in the scripts directory) and our work with the vLEI, we are exposing witness OOBIs from every controller and controller OOBIs from every witnesses. So when I resolve an OOBI for another, non-witness AID I receive the KEL of the controller which provides the cryptographic commitment to the witness and I receive reply messages with provides BADA-RUN data for endpoints for the witness (who has persistent availability on the internet and thus exposes both TCP and HTTP endpoints). Because I do not have endpoints for the AID but I do have endpoints for her witnesses, I have to send messages to the mailboxes on the witnesses to be forwarded to the AID controller. Those messages can be sent to either the TCP or the HTTP endpoint on the witness but for now we are only using HTTP. Since we are configuring out agents without exposed ports on the internet (or using the command line) we run all agents and command line commands with pollers for their mailboxes. They check all witnesses for any messages that have been forwarded to their AIDs. The pollers pull the messages down and send them through the parser just as if they arrived in any other manner (direct connection via TCP or over HTTP).
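The polling flow described above, reduced to a hedged sketch. The `fetch` callable stands in for the HTTP mailbox request to a witness; none of the names here are keripy API:

```python
def poll_mailboxes(witnesses, fetch):
    """Sketch of a mailbox poller: ask every witness for messages that were
    forwarded to our AID, and collect each one for the parser, just as if
    it had arrived over a direct TCP or HTTP connection. `fetch` is a
    hypothetical callable taking a witness identifier."""
    parsed = []
    for wit in witnesses:
        for msg in fetch(wit):   # e.g. an HTTP GET against the witness mailbox
            parsed.append(msg)   # real code would feed this to the CESR parser
    return parsed
```

For example, `poll_mailboxes(["wanAID", "wilAID"], my_fetch)` would drain both witnesses' mailboxes in order.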
What is BADA-RUN if this is BADA:
Thx, “all hands on the Glossary deck”
rodolfo.miranda
A couple of questions after running the alice-bob demo script: 1- If I get the KEL from Alice for example, I see the witnesses' signatures but not the receipts
"witness_signatures": [
    {
     "index": 0,
     "signature": "AACzmKxv-Gttw3GImcJIqwB_l8giL_M95YBFHA3rjx4rMfh6d4KWpa8r4ALRDkjF1FktHrxU0XMMnyHaBWkJUvMC"
    },
    {
     "index": 1,
     "signature": "ABCuC53p3BF8pPWb_i6vo5JAIPSQI3az80OKH9VSejkaS1eLcLxwUKUp1I867oGQ-VyRO-neFsi7CTWGW0RiamAC"
    },
    {
     "index": 2,
     "signature": "ACChnhDjCVEpf2LclwF0ErLPQtFLoVyxxF4nAXItpPqXxFG6aZi2Kpdjwrny9NUriRWC4obDr3MzplSaImoykcYF"
    }
   ],
   "receipts": {},
Shouldn’t the receipts be there? Or does some extra action need to be performed at the witnesses?

2- how can I get Alice’s KEL from Bob's agent? I tried `kli query` but always receive the error `ERR: WitnessInquisitor.__init__() missing 1 required positional argument: 'hby'`
rodolfo.miranda
3- and one conceptual question regarding `demo-witness-script.sh` : what's the role of the `inquisitor`? what's happening there?
Hey Rodolfo, I'll take a look at this today
1. Those signatures represent the witness receipts. The name of that key is unfortunate and should probably be changed.
2. kli query is out of date and needs to be updated. However, a better approach would be to run `kli oobi generate` from the controller whose KEL you want and then `kli oobi resolve` from the controller that wants to import the KEL
3. That script originally had a `kli query` step that was removed. So the inquisitor is a vestige.
rodolfo.miranda
Thanks Phil. So, the value in the signature is the SAID of the receipt?
rodolfo.miranda
And for the KEL, I'll try the approach you mentioned.
Yes, the display is pulling the witness receipts and showing them in the `witness_signatures` key. But they are absolutely the witness receipts.
As for `kli query` ... we just really need watchers. Once we have watchers, we can OOBI with another AID and then let our watcher know that we are doing business with that AID and to keep an eye on it. Then we merely have to ask our watcher what the current key state is. And when we have watchers and super watchers, my watchers can keep checking with the super watchers for all AIDs they are charged with monitoring.
To avoid any sort of eclipse attack, just add more watchers to make it exponentially harder for someone to successfully attack you.
rodolfo.miranda
watchers watchers watchers :grinning:
petteri.stenius
I have done this. There are a couple of details:
• the incept config file needs to be right, as Phil mentioned
• it's possible you need to apply this fix to resolve "ERR: http"
• when doing incept on the agent I think you need to make sure the "toad" parameter matches the number of witnesses (cannot be larger)
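The last point in the checklist above can be expressed as a tiny check (the function name is made up for illustration; keripy does its own validation internally):

```python
def toad_ok(toad, witnesses):
    """Per the checklist above: the toad (threshold of accountable
    duplicity) supplied at incept cannot be larger than the number of
    witnesses, and cannot be negative."""
    return 0 <= toad <= len(witnesses)
```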
anwar.hossain
Hi All, I'm trying to deploy a standalone witness. I followed the `start-witness.sh` script, but when I try to resolve it with steps similar to those in `demo-witness-script.sh` I get a 404. This is the resolve command: `kli oobi resolve --name witness-test --oobi-alias wan --oobi <aid>/controller` Are there any additional steps needed to bring up a witness?
I’ll give this a try and see what happens for me. I’m learning, too. Is everything you did found in the two scripts you mentioned?
rodolfo.miranda
starting witnesses as in `start-witness.sh` didn't work for me. As Phil mentioned, you need to add a configuration file as in `wan.json`. That didn't work for me either (see my previous posts) when passing the config file at `kli init`. However, I was able to hack it in code by passing the config file at `kli witness start`
Hey Folks, sorry I'm late to the party here (we are in the final crush of vLEI go-to-production). There was a mismatch between the parameters and the config file for that script. I added a config file specific to that script and fixed the parameters, so it now correctly starts a witness that exposes its endpoints via its controller and blind OOBI. This is all on the most recent development push from tonight.
rodolfo.miranda
Thanks <@U024CJMG22J>! working like a charm now!
same here, it worked for me the first time.
What does “qb64” stand for in the code? Example: “fully qualified qb64" >
pris (subing.CryptSignerSuber): named sub DB whose keys are public key
>     from key pair and values are private keys from key pair
>     Key is public key (fully qualified qb64)
>     Value is private key (fully qualified qb64)
qualified base64
The qualified in that comment is redundantly duplicative
thanks
Are the `prod` and `bare` function pair in `eventing.py` the functions to be used for graduated disclosure of ACDCs?
Or can the `query` and `reply` functions be used to send information and for graduated disclosure?
The idea right now is to define higher-level credential presentation and credential issuance protocol exchanges using the `exn` peer-to-peer messages. Right now we have one message for credential presentation, created in `presentationExchangeExn` in protocoling.py, and one for credential issuance in `credentialIssueExn` in the same file.
Got it. I see the *scripts/demo/credentials* folder with examples of issuance. To make a presentation script I imagine I only need to take the SAID from the credential issuance and do the following:
kli present --name mydb \
  --base ${MYDIR} \
  --alias mydid \
  --said <the SAID from issuance> \
  --include \
  --recipient <what do I do here?>
Given that, what would I need to do in order to make the recipient a separate agent? I see the docs for that argument indicate it is an
> 
alias or qb64 AID
which makes me think the credentials can only be presented to locally stored recipients. Is that correct, or can the AID be something tied to another agent?
You would need to exchange OOBIs with the recipient
Got it, that’s what I was thinking. So does this mean OOBIs are stored by AID in Keeper/Hab?
I’ll answer that question for myself :slightly_smiling_face: I am having fun reading keripy. I wish I would have started this nine months ago
Can anyone tell me why the `python:3.10.4-buster` version of the base image is used? () From the image scanning tools I found this result (though many of the findings come from the host Linux kernel):
Vulnerabilities found:
  2 Critical 
 31 High 
196 Medium 
 64 Low 
297 Informational 
178 Undefined 
=================
768 Total 
But the alpine version (*`python:3.10.4-alpine3.14`*) gives no vulnerability.
I do not believe there is a specific reason for that base image. If you feel that the image you listed would be better, please open a PR
I might be way off base in this PR: I found what I thought were potential bugs in the lack of recognition of `headDirPath` from `--config-dir`, as well as an incorrect use, based on comments I read, of the value for `base`. Even though the unit tests pass, there is a test script failing, `basic/multisig.sh`, that I’m using to guide me to places where I have missed applying the same change. It looks like I haven’t updated the `multisig incept` CLI command yet to accept `--config-dir`. Does this look like I’m headed in the right direction, Phil or Kevin? This all came about as I tried to unify the command line args and config file usage between the `rotate` and `incept` commands. I found that the `config` variable in the `incept` command was a bit confusingly named, so I was relabeling it when I noticed this other issue.
If the scripts are failing the change is not correct
The CI on GitHub runs the test_demo script
It looks like the meaning of `base` may have changed over time. In `commands/incept.py` the docstring states >
additional optional prefix to file location of KERI keystore
and `hio/base/filing.py` constructs the path as so:
path = os.path.abspath(
    os.path.expanduser(
        os.path.join(headDirPath,
                     tailDirPath,
                     base,
                     name)))
This led me to believe that the `base` attribute was originally intended to be used as an optional segment of the path like so:
`/headDirPath/tailDirPath/base/name`

Because of this code segment in `Filer`, I assumed that both `headDirPath` and `base` must be specified together

However, the `existing.existingHby` context manager does not have a property for specifying the configuration directory, headDirPath, which leads me to believe `base` is being used with the assumption of the default `HeadDirPath` or `AltHeadDirPath` from the `Keeper` class.
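The path composition quoted above can be checked directly with the same stdlib calls (the directory values below are illustrative, not keripy defaults):

```python
import os

def keri_path(head_dir_path, tail_dir_path, base, name):
    """Mirror of the hio Filer path construction quoted above: `base` acts
    as an optional extra path segment between the tail directory and the
    name, and os.path.join simply skips it when it is empty."""
    return os.path.abspath(
        os.path.expanduser(
            os.path.join(head_dir_path, tail_dir_path, base, name)))

# with a base segment:  /usr/local/var/keri/mybase/witroot
# with an empty base:   /usr/local/var/keri/witroot
```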
I was more asking to find out if there were any known bugs or missing parts with regard to how `base` or `config-dir` are being used. I will change the PR so that it doesn’t fail.
This change ended up being bigger than I planned though to be consistent I made the change in all KLI commands that use `base` and `config_dir`. Consistency was my main goal in this PR. If you want me to chop it up into multiple smaller PRs I’d be happy to. It begs the question of whether the argument should be called `config_dir` or `headDirPath` since the underlying `Filer` implementation uses `headDirPath`. It seems like having a common configuration approach for all of the CLI commands is useful so we can make assumptions across the whole CLI surface area. Please take a look: I know it’s big. I don’t usually submit PRs this big though it seemed necessary to have consistent quality across the CLI. It hits a high number of files though it’s really the same change in most files.
Once we have 413 wrapped up I can go back to 274.
rodolfo.miranda
I found all those file prefixes confusing and hence I always try to leave them empty. Moreover, I think the config file can accept a path. I'll take a look at the PR during the week.
I made some changes in the PR.
Looking back, it is clear I misunderstood the purpose of the *config-dir* property by conflating it with *headDirPath*. I decided to finish it out so that I could show my thinking. My work in #413 is not necessary and would be better organized as a new `--head-dir` feature if it is ever needed in the future. I will cancel my PR.
rodolfo.miranda
submitted with a small fix on `kli interact`, which did not query for witness receipts.
Mailboxes are new to me though what you wrote seems to make sense.
rodolfo.miranda
Regarding `Mailboxes`, I'd love to have a call dedicated to them. I couldn't find much documentation other than digging into the code. We can use the call to document them afterwards.
Do we have an example of any place where the following three threshold formats are received?
• Base 10 numeric int value
• Base 16 hex string
• JSON array of strings notation (for weighted threshold designation)
I just need one example, or something to go off of, so I can start work on this. This is for issue A related issue is . I expect the tests created for 273 will either begin or add to the REST API automated tests.
I see the `multisig-triple-sample.json` in the following command from the *multisig-triple.sh* demo script:
kli multisig incept --name multisig1 --alias multisig1 --group multisig --file ${KERI_DEMO_SCRIPT_DIR}/data/multisig-triple-sample.json &
Now I just need an example of the Base 16 hex string.
nvm, I just answered my own question. It’s just a hex conversion of a base 10 value, duh.
What is the purpose of receiving base 16 hex strings for the threshold when numeric int values are supported? Is there another feature that is intended to go along with the base 16 hex string threshold that is not covered by the numeric int threshold?
String values that represent ints are hex strings in KERIpy as it's slightly more efficient. It's the same for the threshold of accountable duplicity, or TOAD.
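To make the efficiency point above concrete: 255 takes three characters as decimal "255" but only two as hex "ff". A small sketch of the round trip (the helper names are made up; this is not keripy API):

```python
def threshold_to_hex(n):
    """Serialize an int threshold as described above: a lowercase hex
    string with no '0x' prefix, so 10 -> 'a' and 16 -> '10'."""
    return format(n, "x")

def threshold_from_hex(s):
    """Parse a hex-string threshold back to an int."""
    return int(s, 16)
```

This also explains the observation below about the sequence number `s`: it switches to letters once it passes 9.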
rodolfo.miranda
I noticed that yesterday while working on the backer and the sequence number `s` reached the number 10 :grinning:
rodolfo.miranda
Cardano Registrar Backer submitted as WIP to show how the code is organized and to discuss the best way to merge all or part of it into `keripy`
Excited to take a look!!
I am getting an error on the OOBI resolution step. Error:
 not found
Also, I had to restart the `start_backer.sh` script with the `rm -r ~/.keri` commented out along with the *FUNDING_ADDRESS_CBORHEX* set to my Cardano Address in order to get the script to detect my funds.
I think I see the problem. Getting a PR ready.
rodolfo.miranda
latest commits make the funding address not required. Did you see the README?
Yeah. I just tried setting that variable since I was getting an error on the second script.
Getting error on line while running the test.
FAILED test_configing.py::test_configer::136 - AssertionError: assert False
*Findings:*
If the user has root permission, `configing.Configer()` creates the file at `/usr/local/var/keri/cf/main/conf.json`, and `configing.Configer(headDirPath="/root/keri")` creates it at `/root/keri/keri/cf/main/conf.json`.
But without root permission, both `configing.Configer()` and `configing.Configer(headDirPath="/root/keri")` create the file at `${HOME}/.keri/cf/main/conf.json`

 test works perfectly on GitHub Actions and in a local environment (if it doesn't have root permission), but doesn't work inside Docker (or any environment with root permission), as the file is created inside `/root/keri/keri/cf/main/conf.json`
This works perfectly on both Docker & Local environment (At least for me).
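The fallback behavior described above can be sketched as follows (a hypothetical simplification of Configer's path logic, not its actual code):

```python
import os

def resolve_base_dir(head_dir_path=None):
    """Hypothetical sketch of the path fallback described above:
    prefer the requested head directory when the process can write
    there (e.g. when running as root), otherwise fall back to
    ~/.keri in the user's home directory."""
    candidate = head_dir_path or "/usr/local/var"
    if os.access(candidate, os.F_OK | os.W_OK):
        return os.path.join(candidate, "keri")
    return os.path.join(os.path.expanduser("~"), ".keri")
```

This is why the test passes for unprivileged users (both calls land under `${HOME}/.keri`) but diverges under Docker, where the process usually runs as root.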
looks good to me
nice catch
Hi, I’m pretty new to KERI. I wonder if there is any example project/application which utilizes `keripy` as its key management?
I’m tinkering with the KERI Command Line Interface (KLI), and it’s pretty confusing.
might help you.
<@U04BRS1MUAH> gonna take a look at it. Thanks a lot.
rodolfo.miranda
<@U024CJMG22J> do you have the document you used at your last session at IIW regarding did:keri? I think you made changes to the doc on the fly, right?
rodolfo.miranda
I'm trying to create an X25519 key with `coring.Salter().signer(code=coring.MtrDex.X25519_Cipher_Seed, temp=True)` but I'm getting `ValueError: Unsupported signer code = P` Is it a problem on how I'm calling the function or it means is not implemented?
rodolfo.miranda
`MtrDex.Ed25519_Seed` works ok
I feel like you’re running into things that might not be implemented, the codes are there but the impl may not be
rodolfo.miranda
that's what I thought. Thanks
You need to create an Encrypter or Decryptor object
A signer doesn’t work for X25519, only encryption
rodolfo.miranda
Ohh. And can I add the created X25519 key as an extra key to the inception?
Not as part of the KEL or saved data no. We need to look at persisting other types of keys like we do endpoints so we can support DID Docs which is what I assume you are trying to accomplish?
rodolfo.miranda
playing with that :grinning:. I was trying to create a non-transferable AID with two keys in the `k` field: the Ed25519 key that we use, and a second of type X25519
rodolfo.miranda
I picked that one because it's the one supported in the DIDComm library I used.
Those keys would not belong in the `k` field
They are more akin to the service endpoint data that we store using BADA RUN semantics
So you could query for them in straight KERI or they can be retrieved and transformed into keys in a did doc like we do with service endpoints in the current reference implementation
rodolfo.miranda
where you store the privates of those keys?
If it’s derived from an ed25519, the privates are the same
Which is the only way to do it in keripy right now
So they all end up in the encrypted keystore
rodolfo.miranda
ok. I'm just doing some PoCs to play and understand better. I'll check the Encrypter and Decrypter objects
The trick would be to use Manager directly with a stem that separates the keys from any other signing keys.
rodolfo.miranda
ok, I was using the eventing.incept directly to keep it basic, but that's because I'm learning while coding.
The other choice, since you get the X keys for free would be to derive them in the did method resolver from the signing keys and put those in the did doc.
Kind of cheap but for a PoC, shrug
rodolfo.miranda
that's what I was planning to do :grinning:
rodolfo.miranda
cheap hacks
Look inside the Manager. That’s where Encrypter is used, generated from Ed keys
For keystore encryption
rodolfo.miranda
I made a simple proof-of-concept script that encrypts and decrypts a DIDComm message using a non-transferable AID. It shows how DIDComm libraries can work with KERI, but it still does not solve how to resolve the OOBI (or resolve the KEL and service endpoint). That part is a hack in the code, but I think it's a starting point for understanding the minimum features required for a KERI lite to replace did:peer.
rodolfo.miranda
I'd like to show this PoC code to you <@U024CJMG22J> and <@U03RLLP2CR5> and see if somehow fits on the needs for Hyperledger Aries. We can have a brief call, or just wait until next week keri call.
Interested to listen in, if you don’t mind
rodolfo.miranda
of course, that's for all!
Yes, I’d like to see what you’ve done and talk through other issues regarding did:peer as well. I’m fairly wide open the rest of the week except Friday morning (PST)
rodolfo.miranda
I'm available tomorrow morning and gaps on the afternoon (EST) and Friday morning (EST)
I look forward to seeing what you’ve got. :-)
rodolfo.miranda
<@U024CJMG22J>, do you think it's ok to show and talk about didcomm PoC at tomorrow's keri meeting?
Yes, I think that would be a great idea.
rodolfo.miranda
good. <@U03RLLP2CR5>, are you available tomorrow to join keri's call for a bit? I'd like to know your opinion.
daniel.hardman
yes, I will be there
Curious about the `swagger` UI on the KERIpy agent. After I ran the command
kli agent demo --config-file demo-witness-oobis-schema
It showed that
******* Starting Multisig Delegation Agents on ports 5623, 5723, 5823, 5923 .******
However,
1. I cannot access the swagger html page on `` or any other port
2. What is `demo-witness-oobis-schema` file? Is it stored in the OOBI database as well?
To access swagger:
Terminal 1: `kli witness demo`
Terminal 2:
• Move everything from `static/swaggerui` to `src/keri/app/cli/commands/agent/` (Idk why this is the case :3, but I needed to do that)
• Run `kli agent demo --config-file ./scripts/keri/cf/demo-witness-oobis.json`
thanks <@U04BRS1MUAH> will try that
petteri.stenius
I think you need to start `kli agent demo` from the keripy main folder, like this `cd ~/keripy` `kli agent demo`
<@U03U37DM125> <@U04BRS1MUAH> I tried running `kli agent demo` on my Windows PC under Ubuntu WSL2, and the SwaggerUI works fine. The issue found yesterday was on my MacBook; will investigate later.
same
What is the way to verify an ACDC created with KERI? Is it `kli verify`? I see the `vc issue` and `vc present` commands for `kli vc …` and am looking for a way to verify the ACDCs created with `kli vc issue`.
Off the top I'd guess you are simply missing the `vc` from the `verify` command if it follows suit on the syntax of the others. But I don't know the `kli` command and its usage. Maybe you are asking a more nuanced question.
I can look at the code if you want. What repo does `kli` come from? `KERIpy`?
leads here:
and here's your problem
I don't see verify at all.
You can use 't' in github to search for filenames
So here is the command you referenced, you were right:
Where is your KEL being stored?
I'd guess if you're having issues verifying you are perhaps not presenting the KEL(s)
But I am very new
To answer your question, to me it does look like `kli verify` is what you want.
But you may need to break the data down to use it
There probably should be a `vc verify`
Really if you're talking about issuing a VC to another party and having them verify there is an additional component you need that provides resolution of external KELs, or you need to staple any KELs you need to the presentation.
I think.
Someone please correct me if I misunderstand.
petteri.stenius
There's a project called sally that I think does what you want. I have gotten it to run and had it verify a single-sig QVI credential
`kli vc list` from the recipient of the credential will receive any issued credentials and verify them, presenting the results.
Nice
If you look at the script `scripts/demo/vLEI/issue-xbrl-attestation.sh` you'll see a full workflow of credential issuance with chaining for 5 credentials across 4 or 5 participants.
Thanks Phil!
petteri.stenius
<@U024CJMG22J> i understand `vc issue` and `vc list` work together. For `vc present` is there any other recipient than sally that could verify the presentation?
As you pointed out <@U03U37DM125>, only the credential verification service currently called Sally that was purpose built for GLEIF Reporting API
petteri.stenius
Ok thanks! One more question, what about `kli vc export`?
Currently the export tool does not check mailboxes for issuances that have not been processed but it certainly could (should?). All export does currently is to output either to console or file a stream of a credential and all supporting KELs, TELs, chained credentials and cryptographic primitives that anyone could use to verify that credential.
Is this line from the `test_signinput` in Sally an example of how to verify with KERI?
assert hab.kever.verfers[0].verify(sig=raw, ser=ser) is True
Thanks for checking into that Jason. I came to similar observations and have been looking to find, or write, a basic credential verifier.
ok, `keri.vdr.verifying.processCredential` is what I was looking for.
And thanks Petteri, I didn’t know what Sally was before now.
So, if I understand this correctly, the `MailboxDirector` has an instance of the verifier it uses to process each incoming message and the messages can include credentials, as in the following code snippet:
mbd = indirecting.MailboxDirector(
  hby=hby, 
  exc=exc,
  kvy=kvy,
  tvy=tvy,
  rvy=rvy,
  verifier=verifier,
  rep=rep,
  topics=["/receipt", "/replay", "/multisig", "/credential", "/delegate", "/challenge"])
Is that right?
So, a simple credential verification service would be just a KERI agent that responds to the `kli vc present` input with a success or a failure response.
Why would `kli interact` hang? Waiting for witness receipts?
looks like it:
witDoer = agenting.WitnessReceiptor(hby=self.hby)
self.extend(doers=[witDoer])

if hab.kever.wits:
    witDoer.msgs.append(dict(pre=hab.pre))
    while not witDoer.cues:
        _ = yield self.tock
rodolfo.miranda
Conceptual question: in the demo script multisig-delegate.sh, should the `delegator` have been called `delegate` instead?
rodolfo.miranda
`scripts/demo/vLEI/issue-xbrl-attestation.sh` <--- great script !!
No, in that script, the AID with alias delegator is the delegator for the multisig AID. If you look at the config file that is used to create the AID with alias multisig, it specifies the AID for delegator as `delpre` designating it as the delegator
I’d like to write a blog post tutorial and video on how to do what would be `kli vc verify`, so it is reasonably clear to anyone trying to use KERI how to use it for issuing and verifying credentials. Here’s the high-level overview from what I understand. Please correct me where I am wrong and add any missing parts. I know Petteri mentioned and it’s only a few hundred lines of code. I haven’t looked into it deeply yet. From what I understand there are a few things involved:
1. Prefix creation on either side, both controllers A and B
2. Pairwise OOBI exchange between controllers A and B
3. VC registry creation for controller A
4. ACDC schema creation by controller A
5. ACDC schema registration by controller A
6. ACDC issuance by controller A
7. ACDC transmission to controller B, which likely triggers a VC registry creation in the Hab if it doesn’t exist yet
8. ACDC schema discovery by controller B
9. ACDC reception by controller B verifies the credential
10. ACDC schema validation by controller B
11. ACDC signature verification by controller B
I haven’t looked deeply into the vLEI codebase yet, though from what I can tell it is used for some sort of schema registration and discovery. And if the features don’t exist to do this yet then please post the GitHub Issues for them so I can know where to focus my efforts. I’m ready to pitch in to build whatever is missing.
And I received confirmation that the `MailboxDirector` class will be helpful: > has an instance of the verifier it uses to process each incoming message and the messages can include credentials, as in the following code snippet: >
mbd = indirecting.MailboxDirector(
>   hby=hby, 
>   exc=exc,
>   kvy=kvy,
>   tvy=tvy,
>   rvy=rvy,
>   verifier=verifier,
>   rep=rep,
>   topics=["/receipt", "/replay", "/multisig", "/credential", "/delegate", "/challenge"])
And a key point from my earlier question: > So, a simple credential verification service would be just a KERI agent that responds to the `kli vc present` input with a success or a failure response.
Thanks Phil for adding the confirmation to my earlier question. I hope to wrap my mind around this soon.
petteri.stenius
I forgot to mention there's also . I believe sally is an extension of kara. The main difference is that sally validates vLEI chained credentials. The flow you describe is pretty much what happens when using sally (or kara).
I'm pretty new to KERI and KERIpy. I have been playing with it for a month and wonder: is there any official place to ask questions or make suggestions about KERIpy?
Thanks Petteri, that is very helpful.
You’ve found it. You can also open issues on the keripy repository or any of the other WebOfTrust repositories.
Will there be an option for the KERI codebase to use hexadecimal `0x.....` instead of `Base64`?
I don’t believe so. Base64 is part of the CESR spec
Since HIO supports “flow based programming that sees all components as asynchronous and linked by asynchronous buffers”, would it be correct to say HIO is an async runtime? I’m trying to understand HIO so I can map its concurrency model to things I’m already familiar with as much as possible. I want to implement a really simple version of Raft, which I’ve already done with regular Python threads and sockets, in HIO to teach myself HIO. Are either `doing.DoDoer` or `doing.Doer` analogous to `asyncio.create_task()`, or is there a better analogy? And a related question: are there any good visuals or diagrams of how HIO works, in the IoFlo manuals or otherwise?
As an example I’d like to understand what performs the execution of the following task:
def list_credentials(args):
    """ Command line list credential registries handler

    """
    ld = ListDoer(name=args.name,
                  alias=args.alias,
                  base=args.base,
                  bran=args.bran,
                  verbose=args.verbose,
                  poll=args.poll,
                  said=args.said,
                  issued=args.issued,
                  schema=args.schema)
    return [ld]
Is the ListDoer invoked on instantiation or by something that executes all doers in the returned list?
There is a lot to learn here. I'll give you a quick overview of what I've learned working with Hio through KERIpy.
Doers are coroutines that yield execution to other coroutines throughout Hio, in particular while waiting for I/O. You create them either by extending the Doer class from hio or by writing a "generator method" (one that `yields`) in Python and "doifying" it with `doing.doify`
A good example of this is all the methods in KERIpy named something like `escrowDo`.
The `H` in Hio stands for "hierarchical", and so with Hio you can create a hierarchy of coroutines that manage each other and have dependencies. The main class for creating a hierarchy is the `DoDoer`, which is initialized with a list of dependent `Doers`. A great example of this in KERIpy is any of the Endpoint classes in `kiwiing` that extend DoDoer, launch with something like a Postman and one of its own methods that needs to run as a coroutine, but also create and manage new coroutines during their lifecycle. For example, some will create `WitnessReceiptor` instances that run as coroutines. The Endpoint will create it, add it as a running coroutine with `self.extend`, let it perform its task, then clean up by removing it with `self.remove`. Lots of examples in kiwiing of this behavior.
Finally, to start a tree of coroutines (Doers and DoDoers) you need a `Doist`. That is the top level that runs an entire hierarchy. Here is the `runController` method from KERIpy that is a helper for starting an entire hierarchy:
def runController(doers, expire=0.0):
    """
    Utility function to create a Doist to run doers
    """
    tock = 0.03125
    doist = doing.Doist(limit=expire, tock=tock, real=True)
    doist.do(doers=doers)
That will run forever or until there are no more coroutines running, then it will exit.
One of the challenges of the `kli` is to run and wait for coroutines to finish and clean up after they are done (with tons of asynchronous tasks going on in the background) and exit cleanly. Most commands want to have a quick, clean exit. Others run forever (`kli witness start`).
If you look at `kli.py` you'll see that it treats all the subcommands as Doer factories which it loads and then calls `runController` allowing the program to run until all the Doers it was given are done.
A common pattern in the kli command set is to have one top level class that is the `DoDoer` that starts all dependencies, run their coroutines, then have one method that is itself a coroutine that does all the work (including creating and deleting new coroutines), then cleans up the dependency coroutines with `self.remove` and exits leaving the top level Doist with nothing to do so it exits.
That's about all I have for now. There are hundreds of examples throughout KERIpy and hopefully this gives you enough of a head start to use them to learn more.
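A stdlib-only analogy of the Doist/Doer pattern described above (this is NOT hio's actual API, just the underlying generator idea): each Doer is a generator that yields control after a bit of work, and the Doist runs them round-robin until all are exhausted.

```python
# Each "doer" is a plain generator: do a step, then yield control,
# like `yield self.tock` in a real hio Doer.
def counter(n, out):
    for i in range(n):
        out.append((n, i))
        yield 0.0  # hand control back to the scheduler

# A toy "doist": run all doers round-robin, dropping each one as it
# finishes, and exit when nothing is left to do.
def run(doers):
    doers = list(doers)
    while doers:
        for d in list(doers):
            try:
                next(d)
            except StopIteration:
                doers.remove(d)

out = []
run([counter(3, out), counter(2, out)])
# The two tasks interleave step by step, then the loop exits on its own,
# mirroring "run forever or until there are no more coroutines running".
```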
wow, thank you so much Phil. It will take me a bit of time to digest this.
Got it, I see now; this makes so much sense. The
    try:
        doers = args.handler(args)
        directing.runController(doers=doers, expire=0.0)
`kli` main runs all of the commands created by the handler factories.
This is enough to get me started.
Do we have a KERI postman collection exercising the KERIpy agent routes?
If not I will make one.
rodolfo.miranda
there's the swagger UI
I’ll check that out, thanks.
So if you launch an agent with admin port of 5630, you can access the SwaggerUI at:
thanks
rodolfo.miranda
Is there any documentation regarding the Mailbox? I'd like to understand how messages are structured, especially EXN. I notice that there is a classification based on topics like /receipt, /credential, /challenge, etc. Are those part of the KERI spec or the agent implementation?
I apologize if I have missed something obvious, but I ran
kli agent start -a 5630
and then tried to navigate to  but am getting a 400 response.
The best documentation on the mailbox is keripy; I think there might be a hackmd where we talked about exns before implementing them. I’ll see if I can dig it up. We have pending work to extract the REST “agent” API code into its own repo away from keri core (but still dependent on it); the mailbox would be part of that too. They are not part of keri core, no, but do utilize the same approach to structuring events, compact labels, etc.
A question related to the ongoing `cesride`/`parside` efforts: as those mature, are there any plans to replace the pure-Python implementation of CESR handling in `keripy` by migrating to those via an FFI binding?
Yes.
What directory were you running that from?
<@U04NF2VP5GA> Try running `kli agent vlei` and then going to ``
That would be the definition of success for cesride.
This message was deleted.
No there are not. But we are working on a TypeScript partial implementation of CESR primitives and other aspects of KERI to create a minimum client for "signing at the edge". The effort is called Signify. There is a repo of TypeScript code at:
There are channels for discussion at <#C04G1KR5R6D|signify> and <#C04NGM6FJ73|signify-dev>
I am just about to push a new PR to signify-ts with an actual working module some time this evening.
This message was deleted.
Yes, there are dozens of scripts in the scripts/demo directory
<@U03EUG009MY> I get a swagger doc for the vlei agent <@U024CJMG22J> I was running the cli from the root of the repository as well as the scripts dir.
Why would an establishment-only (`estOnly`) KEL use a rotation event to anchor the registry inception seal? Lines 400-404 of `src/keri/vdr/credentialing.py`:
        if not hab.group:
            if estOnly:
                hab.rotate(data=[rseal])
            else:
                hab.interact(data=[rseal])
What is the thinking behind when a registry inception event is also an establishment event?
Why would it be different?
Is it that for establishment only KELs that registry inception is allowed in that registry creation can be seen as a type of establishment event, the establishment of a credential?
And, a follow up question, which was my original question: Are interaction events the mechanism by which a credential’s TEL is connected to a prefix’s KEL? It seems I have answered my own question in the affirmative, though I want to make sure I’m not missing anything.
You are looking at it backwards. When you declare an AID to be establishment only, you are only allowed inception and rotation events. Therefore the ONLY way you can anchor anything into your KEL is with a rotation event, because interaction events are not allowed. This provides an added layer of security because your keys all become one-time-use keys for key events. Once used, they have to be rotated, because you are only allowed rotation events.
The anchoring is not "connecting" the TEL but providing a cryptographic commitment to the TEL. You are creating an event in your own KEL with the SAID of the TEL event and signing the event in your KEL, thus also signing the SAID of the TEL event, providing the commitment. This applies whether the anchor is an interaction event or a rotation event.
I see, so both a rotation event and an interaction event may be used to anchor cryptographic commitments in a KEL. So a registry inception event is not an establishment event. An establishment event, rotation event, can anchor a commitment to the TEL for the registry inception event. When a given AID is establishment only this provides an additional layer of security by requiring a new key for each event, meaning a specific key used to anchor a TEL commitment will only be used once in an establishment only KEL. Thank you for clearing this up.
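The anchoring described above can be pictured with a minimal sketch (the field values below are placeholders for illustration, not real SAIDs): the seal carrying the registry/TEL event's identifier, sequence number, and digest goes into the data field `a` of a KEL event, and signing that KEL event is what commits the controller to the TEL event.

```python
# Placeholder values for illustration; real fields hold CESR-encoded SAIDs.
rseal = {
    "i": "Eregistry_identifier_placeholder",  # registry (TEL) identifier
    "s": "0",                                 # TEL event sequence number (hex)
    "d": "Etel_event_said_placeholder",       # SAID of the TEL event
}

# Anchored via a rotation event ("rot") when the AID is establishment
# only; otherwise an interaction event ("ixn") may be used instead.
kel_event = {"t": "ixn", "a": [rseal]}
```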
I am trying to set up a KERI witness node on Amazon AWS, on Ubuntu 22.04 with KERIpy installed. For *production grade* use, is it possible to set up a witness node using the `kli` command as shown in `start-witness.sh`?
Moreover, I'm curious about the first line of the `kli init` command
kli init --name witness --nopasscode --config-dir ${KERI_SCRIPT_DIR} --config-file witness
There is an argument `--config-file witness`, but I cannot fathom which configuration file `witness` is referring to.
`witness` is referring to this file: ``
For production:
- Don't use --nopasscode
- Configure the port number of the witness from the config file
<@U04BRS1MUAH> Thank you !!!
On the same machine, is it possible to start multiple witnesses in the same terminal?
for i in `seq 0 1 4`
do
    echo "Start witness_${i}"
    kli witness start --name witness_${i} --alias witness_${i}_alias &
done
And I have set each witness configuration file to use different port (by using different configuration file)
I tried to follow the idea of `kli witness demo`, but the first witness set up successfully while the others failed:
Witness witness_0 : BH1Cw8JeU4Aa1BR2SVzNlGJWOHNkSW1aFyOxivIlO7EK
ERR: 'NoneType' object has no attribute 'accept' # witness_1
ERR: 'NoneType' object has no attribute 'accept' # witness_2
ERR: 'NoneType' object has no attribute 'accept' # witness_3
ERR: 'NoneType' object has no attribute 'accept' # witness_4
You should run each witness in a separate terminal. Running them in a single terminal generates the error (in my case)
I created an issue for that: Temporary solution: I needed to restart my docker.
Got a good laugh when I found this:
def scoobiDo(self, tymth=None, tock=0.0):
LOL, I've been waiting for someone to find that.
hahah, only in KERI land
petteri.stenius
You can start multiple witness instances from a single shell. On a single host, each witness instance must have a different name with the `--name` parameter. They must also listen on different ports with the `-H` and `-T` parameters. For example, I have used a script like this to start the exact same witnesses `kli witness demo` starts. Put the following into wan.sh, then repeat with wil.sh, wes.sh etc., changing the name, salt, tcp and http parameters in each.
#!/bin/bash
cd ~/keripy/scripts || exit 1
. ./demo/demo-scripts.sh || exit 1
name=wan
salt=0AB3YW5uLXRoZS13aXRuZXNz
tcp=5632
http=5642
kli init --name $name --nopasscode --salt $salt
kli incept --name $name --alias=$name --config ${KERI_SCRIPT_DIR} --file ${KERI_DEMO_SCRIPT_DIR}/data/wil-witness-sample.json
exec kli witness start --name $name --alias=$name -T $tcp -H $http
Then I start all witnesses with the following script. Put this into `witness-demo.sh`

#!/bin/bash

function stop-all {
    kill $(jobs -p)
}

trap stop-all EXIT

bash wan.sh &
bash wil.sh &
bash wes.sh &

wait $(jobs -p)
To start all witnesses run `bash witness-demo.sh`
<@U03U37DM125> Thank you for this super detailed script. Maybe mine is not working because, even though I put the HTTP and TCP ports in the `json` configuration file (for the inception), I *haven't* added the ports as arguments `-T $tcp -H $http` in the `kli witness start` command. Will try it out tomorrow and provide feedback to you <@U03U37DM125> <@U04BRS1MUAH>
I want to understand OOBI resolution. My questions below are based on what I am reading in `issue-xbrl-attestation.sh`.
1. Does `kli oobi resolve` tell the database/keystore/Habery instance identified by `--name` how to resolve the OOBI when queried by CID/AID prefix? And is the EID used to validate the connection?
2. Is the CID/AID prefix used elsewhere in KERIpy as a key in the OOBI database to look up resolved OOBIs, or is only the original URL used? If only the original URL, then how is a prefix found in a KERI event or ACDC transformed into an OOBI URL to facilitate lookups?
kli oobi resolve --name qvi --oobi-alias external \
  --oobi 

OOBI_RE = re.compile('\\A/oobi/(?P<cid>[^/]+)/(?P<role>[^/]+)(?:/(?P<eid>[^/]+))?\\Z', re.IGNORECASE)
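As a quick sanity check, the `OOBI_RE` pattern quoted above splits a witness OOBI path into its cid/role/eid parts. This sketch applies it to the two AIDs mentioned in this thread (wan and external):

```python
import re

# The OOBI_RE pattern quoted above, unchanged.
OOBI_RE = re.compile('\\A/oobi/(?P<cid>[^/]+)/(?P<role>[^/]+)(?:/(?P<eid>[^/]+))?\\Z',
                     re.IGNORECASE)

# A witness OOBI path: cid is the external AID, eid is the witness wan.
path = ("/oobi/EHOuGiHMxJShXHgSb6k_9pqxmRb8H-LT0R2hQouHp8pW"
        "/witness/BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha")
match = OOBI_RE.match(path)
```

The named groups make the roles explicit: `cid` is the controller AID being introduced, `role` is how the endpoint serves it, and the optional `eid` is the endpoint provider's AID.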
These questions are really part of my attempt to understand what OOBI resolution does. The question below is a clearer ask than the low level details I asked about above.

Since
• *wan* is `BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha` (`…f2Ha`) and 
• *external* is `EHOuGiHMxJShXHgSb6k_9pqxmRb8H-LT0R2hQouHp8pW` (`…p8pW`)
Then is this OOBI resolution command telling the `qvi` database about the *external* AID `...p8pW`, and to use *wan* `…f2Ha` as a witness to check *external*’s KEL for key authority? And a follow-up: on every OOBI resolution, is a verification of key state completed to verify that the AID in the OOBI is authentic?
joseph.l.hunsaker
Oh boy :man-facepalming:
A separate question: what purpose does credential caching serve in the vLEI-server? The start command from the Sally readme is as follows (same as for the issue-xbrl-attestation.sh script): `vLEI-server -s ./schema/acdc -c ./samples/acdc/ -o ./samples/oobis/` Yet if I use some other directory like `-c ./tempdir` for the ACDC directory, the issue-xbrl-attestation.sh script completes just fine. This leads me to believe that the script does not leverage any credential caching capability of the vLEI-server. So then, what purpose does the credential caching functionality serve, and how would I exercise it?
I see the
class SchemaEnd:
    ...

    def on_get(self, _, rep, said):
        if said in self.schemaCache:
            data = self.schemaCache[said]

            rep.status = falcon.HTTP_200
            rep.content_type = "application/schema+json"
            rep.data = data
            return

        if said in self.credentialCache:
            data = self.credentialCache[said]

            rep.status = falcon.HTTP_200
            rep.content_type = "application/acdc+json"
            rep.data = data.encode("utf-8")
            return
That returns the `-acdc.cesr` files, and the other code shows that ACDCs can be resolved by OOBI; I presume a data OOBI.
This leads me to the next question, what creates the contents of the `-acdc.cesr` files? Is that some export of a credential from a registry with `kli vc export`?
And is the purpose of this cache just to speed up the workflows that rely on long-lived credentials?
For a visual, does the following seem like an accurate representation of OOBI resolution? Would you emphasize different specifics than I have here, and does it seem like too much info? The goal is to communicate how OOBIs work with the `issue-xbrl-attestation.sh` script. Each arrow is an OOBI. There are pairwise OOBIs for each controller, and then data OOBIs for the iXBRLDataAttestation schema.
controllers-and-oobis.png
Yes and yes
Since a witness is cryptographically committed to in the KEL, that verification for a witness OOBI is automatic. The OOBI only serves as an out of band discovery mechanism. All verification occurs on the KEL that is retrieved from the resolution of the OOBI.
That was used for testing and proof of concept to serve up root credentials. For the most part the server for vLEI is just for testing purposes
Yeah, this diagram is overwhelming. Way too much to make sense of it.
Got it, thanks.
Thanks again.
Ok, I’ll simplify. Thank you for your comment.
Two questions on writing ACDC schemas: 1. How do I compute the `$id` property at the root level for a schema as below in the *qualified-vLEI-issuer-vLEI-credential.json* file
{
  "$id": "EBfdlu8R27Fbx-ehrqwImnK-8Cm79sqbAQ4MmvEAYqao",
  "$schema": "",
  ...
}
2. How do I compute the `$id` property in the `"a"` and `"r"` sections later on in the file:
{
  ...
  "a": {
      "oneOf": [
        {
          "description": "Attributes block SAID",
          "type": "string"
        },
        {
          "$id": "ELGgI0fkloqKWREXgqUfgS0bJybP1LChxCO3sqPSFHCj",
          "description": "Attributes block",
  ...
  "r": {
      "oneOf": [
        {
          "description": "Rules block SAID",
          "type": "string"
        },
        {
          "$id": "ECllqarpkZrSIWCb97XlMpEZZH3q4kc--FQ9mbkFMb_5",
          "description": "Rules block",
}
Once I know this I may have everything I need to write schemas other than a knowledge of how edges work, though the `issue-xbrl-attestation.sh` script gives me an idea.
The $id is the SAID of the schema. The $id in the a block is the SAID of the schema of the a block. The $id in the r block is the SAID of the schema in the r block. I feel like these questions could be answered by looking at the generate code within the same repo; it’s not magic.
The generate code is (sadly) extremely explicit about what it is doing.
The generate code? Do you mean `kli saidify --file <somefile>`?
That’s what I thought, that it would be the SAID of all of those blocks; I was more unsure of which particular parts of the schema the SAID is computed from. So does this mean that the SAID of the overall schema can only be computed once the SAIDs of the a and r blocks are computed and added to those blocks, effectively making the schema SAID (partially) a function of the SAIDs of the a and r blocks?
I appreciate you answering my questions. I am looking to write a utility to compute the SAID of a schema and the other blocks for me so I don’t have to do everything by hand.
So I want to be sure I have the correct idea of things.
I see the `-l` option for `kli saidify`. So, would the following process be complete to fully SAIDify a schema?
1. `kli saidify --file a_section.json --label $id`
2. `kli saidify --file r_section.json --label $id`
3. Put *a_section.json* and *r_section.json* in their appropriate envelope in the overall *my_acdc_schema.json*
4. `kli saidify --file my_acdc_schema.json --label $id`
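The inner-blocks-first procedure above can be sketched in a few lines. This is an illustration ONLY: real KERI SAIDs use Blake3-256 with a CESR derivation code prefix (`E`), not plain SHA-256, and KERIpy's serialization rules; only the shape of the procedure is shown here.

```python
import base64
import hashlib
import json

def said_sketch(schema: dict, label: str = "$id") -> str:
    """Illustration only (SHA-256 stands in for Blake3-256 + CESR):
    1. replace the label's value with a 44-character '#' placeholder
    2. serialize the block deterministically
    3. hash, encode, and treat the digest as the block's identifier
    """
    dummy = dict(schema)
    dummy[label] = "#" * 44  # fully qualified SAIDs are 44 characters
    raw = json.dumps(dummy, separators=(",", ":")).encode()
    digest = hashlib.sha256(raw).digest()
    return base64.urlsafe_b64encode(digest).decode().rstrip("=")

# Inner blocks are SAIDified first, then embedded, then the root:
a_said = said_sketch({"$id": "", "description": "Attributes block"})
r_said = said_sketch({"$id": "", "description": "Rules block"})
root = {"$id": "", "a": {"$id": a_said}, "r": {"$id": r_said}}
root_said = said_sketch(root)
```

This makes the dependency explicit: the root SAID is a function of the a-block and r-block SAIDs, so changing either inner block changes the schema's `$id`.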
There is a PR in WebOfTrust vLEI that resurrects some old code to do generic top-level SAID creation of a schema. The code to generate the vLEI schema is extremely explicit, but works. I look forward to a PR that parses all schemas as a first pass, then backfills dependent schemas on subsequent passes; that would be a fun exercise.
I was referring to the generate code in WebOfTrust/vLEI
Sorry for the confusion
Got it, I will give that a shot this week.
Looks like Arshdeep may have already finished it: I will review it and see if anything needs to be added.
I know I don’t understand append to extend yet though I suspect that SAIDifying an appendage to a credential will be similar to SAIDifying the original credential, would it not?
If that question doesn’t make sense I can re-ask it at the meeting. I’m a bit tired tonight and should go to bed. Circadian rhythms are powerful…
Totally a morning person.
Ha, append to extend is a ToDo
Once I understand append to extend I’d be happy to submit a PR subsequent to Arshdeep’s PR for a schema extension.
If you read my comment on that PR, it explains why I consider it insufficient. There are also many examples of single nested computations of SAIDs in KERIpy. IMO, to truly do complex schema SAID generation would likely require a DSL to express the relationships of any given ecosystem and the dependencies therein.
Or do something brute force and extremely inelegant as I did with vLEI
I am okay with brute force for my initial understanding. That’s enough for me to write a guide on my blog. Once I have done it a few times by hand then I’ll be in a better position to make an intelligent contribution in a PR.
Thanks for the pointer! That generate.py file is what I needed, and the PR you mentioned.
A DSL makes sense. I’m really glad I took a compilers course in January because now I have at least a rudimentary understanding of how I might approach the task of writing a DSL. I would tremendously enjoy the challenge of writing an ACDC schema DSL! Perfect way to cement my learning.
I am confused between the `Witness` and the `endpoint for witness` for the KERI `inception` event. I have set up 5 witnesses already. Then, I tried to incept a local AID (say, Alice's AID). I did 2 steps. The first step is to initialize the key store `ks` using the `kli init` command, and I passed the `--config-file` during the `init` event as shown below (to initialize the key store to know about my 5 witnesses)
{
  "dt": "2023-02-26T12:57:59.823350+00:00",
  "iurls": [
    "",
    "",
    "",
    "",
    ""
  ]
}
I can load all 5 OOBIs for Alice to incept her local AID in the next step
===========================================================
However, for this `incept` step, when I ran the `kli incept` command and pass the `--file` as shown below
{
  "transferable": true,
  "wits": [
    "BC7qjybOGR7-UPVp-R-gqPmeeFY2eI6FYr-OE2OI8iMX",
    "BLQOLhl3SAhuBEWbNT3CzyU2u4gesbpKx70Tzvkrk-1Y",
    "BAHVSvATS6N8fQfV-UfJB_xZXzRuEL5mX7cWKdxNKVzK",
    "BBmSeiRdPwwH3Ph5VSPJXFa1ub2zfYiR1XS7LUqGyJ_g",
    "BMF2DG6OJ-En0Rb2UqSfZ0g5VtruxhIIrOE-XNcBprpR"
  ],
  "toad": 5,
  "icount": 1,
  "ncount": 1,
  "isith": "1",
  "nsith": "1"
}
It shows an error that it cannot receive any receipts from any of the 5 witnesses
Waiting for witness receipts...
`ERR: unable to find a valid endpoint for witness BC7qjybOGR7-UPVp-R-gqPmeeFY2eI6FYr-OE2OI8iMX`

Alias:  Alice Alicia
Identifier: EOBA19qSkKfflSMXj28FlV8i5JwTkYSUX4vZjeLSsMLV
Seq No: 0

Witnesses:
Count:          5
Receipts:       0
Threshold:      5

Public Keys:
        1. DD5K6t_-cV2cijPpXAU6Ajs_GI6tYVmtbhDE35eX08BL
So the received Receipts are at `0`
*Did I miss anything?* 
So it works, just pass in the `-H` and `-T` as arguments in the `kli witness start` command
What is the output of your `kli init` command?
KERI Keystore created at: /home/ubuntu/.keri/ks/QAR-ALICE
KERI Database created at: /home/ubuntu/.keri/db/QAR-ALICE
KERI Credential Store created at: /home/ubuntu/.keri/reg/QAR-ALICE
        aeid: BN7nMtdIVbhhY8c6Vslof3AhvaVT1KQY2scX_XLZsW9t

Loading 5 OOBIs...
 succeeded
 succeeded
 succeeded
 succeeded
 succeeded
Since I'm new to KERI, I'm not sure if there is any difference between a witness and a witness endpoint. I notice that the OOBI URLs should end with `/controller`, e.g. `http://......./oobi/........../controller`
yeah, your `iurls` look like data OOBIs rather than witness OOBIs.
Like you mentioned, witness OOBIs generally look like this: `http://<ip_address>:<port>/oobi/<AID>/<role>` with role=controller.
Data OOBIs, or the `durls` attribute in the init config file, look like the following: `http://<IP>:<port>/oobi/<AID>`
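As a quick illustration of the two URL shapes, here is a hypothetical helper (not a keripy API) that tells them apart by path segments:

```python
from urllib.parse import urlparse

def oobi_kind(url: str) -> str:
    """Illustrative classifier (not a keripy API):
    /oobi/<AID>/<role> carries a role (e.g. 'controller'),
    while /oobi/<AID> alone is the data-OOBI shape."""
    parts = [p for p in urlparse(url).path.split("/") if p]
    if len(parts) == 3 and parts[0] == "oobi":
        return f"role OOBI (role={parts[2]})"
    if len(parts) == 2 and parts[0] == "oobi":
        return "data OOBI"
    return "not an OOBI"

print(oobi_kind("http://127.0.0.1:5642/oobi/BC7qjybOGR7-UPVp/controller"))
# role OOBI (role=controller)
print(oobi_kind("http://127.0.0.1:7723/oobi/EGkgX3eZ"))
# data OOBI
```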
<@U03EUG009MY> OMG, it seems I have not learnt anything about `witness OOBI` and `data OOBI`. So when I set up a witness node using the `kli witness start` command, what steps are required beforehand to be able to retrieve the witness OOBIs (instead of just the plain data of that OOBI)?
In addition, is there a document describing the `witness OOBI` and `data OOBI` ?
Some of the descriptions here are inaccurate. The OOBIs defined in the configuration under the key `iurls` are OOBIs for resolving key state. They are not only witness OOBIs. In addition, the OOBIs listed in the samples are also not specific to witnesses. They are "controller" OOBIs because they are designating a resolution point for the controller (thus the role of 'controller' in the URL) key state and endpoints of the AID also listed in the URL. That same type of URL will work for any AID, not just witnesses.
I'm not going to discuss data OOBIs in this thread because I think it will muddy the water and not help solve the original question.
So for the KERIpy samples, the controller OOBIs listed in the config files provide endpoint and key state resolution for the witnesses themselves.
So that when you want to use one of the resolved AIDs as a witness, your instance will know how to contact the witness.
The problem you are encountering has one of two possible causes. First, the lack of the role on the URL ('controller') could very well be the problem.
Second, if the witnesses are not launched correctly with config files that tell them what endpoints to expose, they will not reply with any endpoint information in response to the OOBI.
That data is found in the block named after the alias for the AID of the witness, in the `curls` block. See here for the "wan" witness:
{
  "dt": "2022-01-20T12:57:59.823350+00:00",
  "wan": {
    "dt": "2022-01-20T12:57:59.823350+00:00",
    "curls": ["", ""]
  },
  "iurls": [
  ]
}
This config file for the witness launched with the name of "wan" is being told to generate endpoint responses for 2 endpoints, one HTTP and one TCP.
So you need to ensure each witness has the correct configuration file for its ports and IP addresses and that they line up correctly with the ports and IP addresses the witnesses are listening on.
<@U024CJMG22J> Thank you for the super clarify responses. I think the problem is likely the lack of the `controller` role since I ~did~ do not have any clue how to set it with the witness in the first place.
Thank you <@U024CJMG22J> for clearing this up. This was very useful for me. > In addition, the OOBIs listed in the samples are also not specific to witnesses. I assumed incorrectly that the `iurls` was for witness OOBIs since I noticed the AIDs were of wan, wil, and wes
<@U04GUPCB1M4>, could you please start an issue in an appropriate WOT repo and _copy-paste the results of this thread_ in there? I think this is valuable information.
<@U02PA6UQ6BV> Sure, will do that in a few hours after my dinner.
<@U02PA6UQ6BV> Since I'm still solving this issue, should I start the issue on the github repo now or later (after I resolve it?)
That's a matter of taste. I'd say: copy this thread so far into an issue, paste the link to it here and continue on GitHub. But again: your choice.
Henk, you are thinking the same thing as me! I copied many of Phil’s remarks into my notes on OOBIs. This was very valuable.
Hi all, I'm working through the KERI specs (whitepaper, IETF drafts, etc.) and the code. I understand CESR well by now and am able to process GLEIF responses (OOBI KELs, etc.) programmatically. I'd like to understand the flow of events to and between witnesses. I've already spent some time reading KERIpy's code and have a general idea. Is there a better source to understand the "protocol"? Such as the sequence of messages (i.e. the various ilks: ixn, ksn, qry, ...), receipts, etc.?
Also, when are Direct (TCP?) vs. Indirect (HTTP + SSE/mailbox?) modes used between and to the witnesses/watchers?
Hello Vasily, the code is typically the best source if you’ve already read and absorbed all of the IETF Draft specs and the whitepaper. For your specific question on the flow of events to and between witnesses I believe the whitepaper is the most up to date resource. The has a set of messages in sections 7 through 12 though there isn’t a lot of prose explaining the protocol. Communication is between the controller and the witnesses right now for KELs. There is no witness to witness communication at the moment though I believe this is planned. KA2CE from the Whitepaper shows one plan and I believe the gossip protocol is intended to be used for witness to witness communication. As a side note on the message protocol for ACDCs there is the Issuance and spec with section describing an exchange of ACDC messages.
Regarding direct vs. indirect, with my current understanding of the code, currently only indirect mode is used in KERIpy, to my knowledge. Direct mode could be accomplished by sending the KEL events between two controllers.
Is there a way to set a witness controller using the `kli witness` command?
No, not currently. Witnesses are started in promiscuous mode, accepting and receipting events from any AID that designates them as a witness.
Thank you, Kent! Somehow I was under the impression that witnesses gossip between themselves. As you pointed out, it is mentioned in the whitepaper. The `indirecting.py` module includes logic for both the witnesses and controllers, which I didn't pay enough attention to. Getting used to the naming conventions is a challenge :slightly_smiling_face: I've read the IETF Draft spec (and also the outdated KIDs); unfortunately it's not very helpful, e.g. it calls the interaction event `isn`, which I believe (looking at coring.py #59) is actually `ixn`, and `prd`, which is `pro`? A typo? The bigger challenge, however, is that not all messages are explained beyond the samples (some are described in the whitepaper). Some labels are not covered either, e.g. `r`; it looks like there is just a convention on how to interpret these paths (e.g. is it "log/processor" or "logs/processor"?). Maybe a simple question to start off: how does a controller acquire signatures from all the backers (i.e. witnesses) for the `icp` event, and then how do these signatures get to the KELs of the witnesses? Is it simply: the controller sends the message to each designated witness and collects the receipt, which comes with a signature; when all receipts are collected it then sends the complete event (with all the backer signatures) to the witnesses?
Yes, I believe that is a typo. If you want one of your first PRs to the KERI space I'll leave it for you; otherwise I will make a PR. I do believe it is a typo and should be `ixn` in the spec. I have an index I've been building of what all the field names mean. Most of them are explained in the specs. Are you referring to the `"r"` label in a KERI event message or in an ACDC TEL message? `r` is the rules section in an ACDC, the section where you set up Ricardian Contracts, if any.
`r` in Key events I believe means “route” though I’ll double check. I asked that same question.
I think `rr` stands for return route. I would double check with Phil though. These are Query and Reply KEL message types.
Here is a HackMD on that was the source document for
If you want a walkthrough of the class names in KERIpy Phil or Kevin are great resources, though I know they are very busy. I would be happy to provide answers to any questions you have.
rodolfo.miranda
Is there a message defined so a witness can tell an agent to not expect signed receipts for the key event submitted? I'm thinking of a paid witness service that won't process receipts if the account is suspended, for example.
rodolfo.miranda
or due to maintenance, or because it has reached full capacity...
what is the difference between Base64 URL safe and qualified base64 (qb64)?
rodolfo.miranda
I think that "qualified" means that it follows the CESR serialization: removing the pad characters at the end of the base64 and using that space, moved to the front, as a prefix with the primitive derivation codes.
that’s what I was thinking. Thanks.
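A minimal sketch of that padding trick for the common case of a 32-byte value with a 1-character code (e.g. 'B' for a non-transferable Ed25519 prefix). This is illustrative only, not keripy code:

```python
import base64

def qualify_32byte(raw32: bytes, code: str = "B") -> str:
    """Sketch of CESR qb64 for a 32-byte raw value with a 1-char code.
    Plain base64url of 32 bytes would end in one '=' pad; CESR instead
    prepends one zero lead byte so the pad space moves to the front,
    then replaces the leading character with the derivation code."""
    assert len(raw32) == 32 and len(code) == 1
    b64 = base64.urlsafe_b64encode(b"\x00" + raw32).decode()  # 44 chars, no '='
    return code + b64[1:]

pre = qualify_32byte(bytes(32))
print(pre, len(pre))  # 44-char fully qualified primitive starting with 'B'
```

The result is the 44-character "fully qualified base64" form seen throughout KERI (prefixes, digests, keys), where the first character is the type code rather than wasted padding.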
For the webhook, is the signature in the *SIGNATURE* HTTP header all that is needed to verify the payload in the body? Is the webhook considered a protected resource that should not have open access? The reason I wonder is that I want to understand what security relationship exists, or should exist, between the sally controller and the webhook server in order for the communication between sally and the webhook to be considered secure. It seems that in order for the webhook to trust the message coming from sally on presentation or revocation, the webhook would have to be running either on the same host as sally or on a highly secured network that sally has access to, though not the broader internet. My goal is to layer a simple access control system, or really a business process that depends on secure attribution, on top of the webhook. Does this mean I need to have the program processing the webhook message validate the signature inside the SIGNATURE HTTP header sent to the webhook?
I will have a code example up on my blog soon enough so if this question doesn’t make sense yet then ignore it until I have code posted.
On another note: Richard posted an issue on starting witnesses with `kli witness start`. As I played around with that command today, attempting to replace my script's dependency on `kli witness demo` with a succession of `kli init` and `kli witness start` pairs, I found a deficiency in `kli witness start`: it doesn't support `--config-dir` and `--config-file` like `kli agent start` does. So PR to solve this issue. There is one small problem. While the PR I posted functions, it makes empty config files in `$CONFIG_DIR/keri/cf` that are unused. I am likely using `Configer` wrong or something to that effect. If someone could point me in the right direction that would be appreciated. The PR solves Richard's problem and I was able to replace `kli witness demo` with `kli init` and `kli witness start`; this minor annoyance is the only issue with it.
If you are creating empty files then you aren't reading the valid ones and aren't configuring your witness correctly.
rodolfo.miranda
that's why my disk is full of `keri/cf` folders :rolling_on_the_floor_laughing:
Yep
That’s what I thought, though OOBI resolution and configuration worked just fine as well as witnessing of key events. I will take a closer look. I didn’t add much code in 457.
Looks like I was wrong. It was the arguments I was passing in to the `kli init` command that were creating the file in `${CONFIG_DIR}` :
kli init --name wan --salt 0AB3YW5uLXRoZS13aXRuZXNz --nopasscode --config-dir "${KERI_SCRIPT_DIR}" --config-file wan-witness
I was missing a `main/` path segment on the `wan-witness` file I was using.
Due to the default arguments for the `Configer` class it uses the path segment `"main"` when looking for configuration files.
class Configer(filing.Filer):
    ...
    TailDirPath = "keri/cf"
    ...
    def __init__(self, name="conf", base="main", filed=True, mode="r+b",
                 fext="json", human=True, **kwa):
        ...
The problem can be worked around (solved?) by just putting a prefix on the path given to the `--config-file` argument as so:
--config-file main/wan-witness
There are two potential `--base` arguments, though only one exists in the CLI interface for `kli witness start` right now:
1. The base directory for the keystore (in the .keri or /usr/local dir).
2. The base directory for the configuration files (set to $KERI_SCRIPTS_DIR in the sample scripts).
In the future we could separate them out with another change to the `kli witness start` command by allowing `--base-keystore` and `--base-config` options.
I will post this in the Github Issue.
In light of this <@U024CJMG22J> do you see the PR as ready to merge?
Arguably the PR could be improved by adding in those two suggested base commands, or leaving the existing one as it is and making a new one for the config base.
I will do that if you’d like.
See my comment in the thread above this for an explanation.
I believe the qualification is specifically related to the code, isn't it? To qualify the material, giving it traits of behaviour. The rest of it is just formatting, it doesn't necessarily tell us anything non-quantitative about the material - and thus doesn't qualify it. Someone can correct me if I misunderstand.
<@U024CJMG22J> <@U03EUG009MY> After carefully investigating, I think the issue is not just about how I started the `kli witness`. I tried setting up `kli witness demo` remotely at `13.229.205.28`; you can access those 6 witnesses (`wan`, `wil`, `wes`, …). However, when I tried to init (where the `oobi` can resolve the above-mentioned 6 witnesses) and incept Alice's local AID, it failed to retrieve the receipts from any of the witnesses. Therefore, I think the problem is not about how I set up the witness(es). Is any security configuration needed apart from the `tcp` and `http` ports?
Let's say I'd like to verify a signature in a hotpath. What is the best way to make sure I'm using the latest public key? do I need to make a qry with a `/ksn` route to a witness? The other alternative is to use an oobi backend and get the whole KEL, but it's more involved ( and slower ?) as one needs to traverse the KEL. The reason I'm asking is I'm thinking about usage of keri ids in access tokens, etc.
In cases where a signature is attached to a KERI event, the keys used are the latest signing keys for that KEL. In cases where a signature is attached to something _other than_ a KERI event, it is usually accompanied by a seal in an attachment that specifies the location in the KEL at which it was attached. This way signatures can last beyond rotations. If the signature is supposed to be short lived, the "latest" seal can be used to indicate "current key state". If you are checking for a specific entry in a KEL, you can check your local cache to see if you have _at least_ that sequence number and get the keys from that point in the key state, or query one of the AID's witnesses for a specific sequence number. If you look at `messagize` in `eventing.py` in KERIpy you can see the application of the two types of Seals mentioned above (inside the `if signers` block). If you look at the `WitnessInquisitor.query` method in `agenting.py` you can see the options for querying a witness for various conditions related to an AID's key state. For example, you can query by sequence number or by anchor (used for delegation).
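As a rough illustration of the local-cache option described above, here is a sketch with hypothetical event shapes (plain dicts, not keripy's actual classes): the keys in effect at a seal's sequence number are those of the latest establishment event (icp/rot) at or before it.

```python
def keys_at(kel: list[dict], sn: int) -> list[str]:
    """Illustrative: return the signing keys ('k' field) in effect at
    sequence number sn, i.e. from the latest establishment event
    (icp or rot) whose hex 's' is <= sn. Hypothetical data shapes."""
    keys: list[str] = []
    for evt in kel:
        if int(evt["s"], 16) > sn:  # KERI sequence numbers are hex strings
            break
        if evt["t"] in ("icp", "rot"):  # establishment events change keys
            keys = evt["k"]
    return keys

# toy KEL: inception, interaction, then a rotation to a new key
kel = [
    {"t": "icp", "s": "0", "k": ["DKey0"]},
    {"t": "ixn", "s": "1"},
    {"t": "rot", "s": "2", "k": ["DKey1"]},
]
print(keys_at(kel, 1))  # ['DKey0']  (rotation at sn=2 not yet in effect)
print(keys_at(kel, 2))  # ['DKey1']
```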
<@U024CJMG22J> - thanks a lot! That's really helpful. I'm looking at situations where a signature will be attached to non-KERI-related data, such as an access token, or an attribute inside a cert (which can also carry a SPIFFE id and therefore be an SVID). The way I was thinking about it is: I have a vLEI and a role, perhaps with a delegated key for IoT or API access. After being 'onboarded' (similar to providing a CA cert) I can then look up the current key (the short-lived case) for the prefix and verify the signature. Or update the local cache periodically. Which means, I guess, that I'll need to attach not only the prefix but a seal, minimally an `sn` of the corresponding establishment event.
Has anyone ever experienced `kli witness demo` not working properly? A lot of the time, I cannot retrieve the `oobi` from the witness at the `/controller` URL path. I have opened an issue
I experience intermittent problems with `kli witness demo`. Ever since I switched to manual witness management I have not encountered any problems.
<@U03EUG009MY> I noticed that if I do `kli witness start` manually, I get the same non-working witnesses as those on `kli witness demo`, whose `/controller` URL path is NOT accessible, i.e. a `404` error
I run `kli witness demo` about 100 times a day. Have not seen any problems (or else I would've fixed them. lol). If you aren't getting endpoints for the controller OOBI, it's likely that you aren't reading the configuration file. It probably has to do with where you are launching the command from or files changing on your system.
<@U024CJMG22J> OK, will try to find a way to resolve this
<@U024CJMG22J> - I've looked at `WitnessInquisitor.query` and then at its counterpart `Kevery.processQuery`. If I understand the logic correctly (under the `logs` route), the algorithm doesn't actually return the event that has the given `sn` or the `anchor`. It checks to see if the anchor can be found and that `sn` is below that of the current state. But then it returns the whole KEL anyway:
for msg in self.db.clonePreIter(pre=pre, fn=0):
                msgs.append(msg)
Or is this wrong?
Yes, that is correct. There is nothing in the request to let the witness know what current state the requestor has and any single key event is useless out of order. So the safest thing to return is a stream of the full KEL
Then the response can be received with another `qry` message, `"r":"mbx"` with a `"q":{"topics":{"/receipt"}}` ? Where the actual response will be an SSE event?
<@U024CJMG22J> Thanks for the advice. I found the issue: the `kli witness` process was not terminated properly. Hence, it deleted and created the file at the same time, which caused some of the configuration files to go missing.
<@U024CJMG22J> - question in regards to `qry` messages. The signature for the message (the one sent via the header) is not currently verified by a witness (`Parser.msgParsator` will just ask `Kevery` to prepare a response), but is still required by the `Parser`. As far as I can tell, `Parser` will accept `NonTransReceiptCouples` as a signature (or any other defined type of signature). So for my 'agent', which will ask a witness for a KEL/KSN, I can just use a non-transferable prefix and generate a valid signature for it, which I will attach to the HTTP request. I guess I could just attach any valid-looking signature, but it feels wrong :slightly_smiling_face: Is this all correct? Is this going to be the behaviour going forward? Do other witness implementations (`keriox`?) behave the same way?
OK. I've tried the non-transferable prefix & signature approach. It works but causes an error: the code (`Parser.msgParsator`) wants `TransLastIdxSigGroups`; if `NonTransReceiptCouples` are used then `Kevery.processQuery` receives the parameter `source=None`, and the following line: `self.cues.push(dict(kin="reply", src=src, route="/ksn", serder=ksn, dest=source.qb64)` will cause an exception as `source` is `None`. Which basically means that proper interaction via `qry` to the `logs`, `ksn` and `mbx` routes only works for agents/wallets with transferable prefixes.
Sorry, I don't understand the question. Perhaps you can raise this as a discussion at the next KERI meeting. Might be more efficient to do in person
<@U024CJMG22J> thank you, happy to discuss at the next meeting. Just to clarify: I'm trying to build my own tooling to interact with the KERI/GLEIF infrastructure. A simple use case I'm trying to get to work is: I trust a controller. That controller signs arbitrary data (non-KERI or ACDC) and sends it over. I want to be able to dynamically get the corresponding public key from a witness and verify the signature. I was assuming that it's possible to use the "api" that witnesses provide (hence the experiments with the `qry` messages). Since it's not documented (at least I couldn't find anything) I'm reverse engineering it and trying to verify my assumptions. So it looks like the `qry`-based "api" is really designed for wallets that use a transferable prefix. Right? I could, of course, just parse the OOBI KEL, which is equivalent to parsing the response to an `"r":"logs"` `qry`.
BTW, how does the community approach the question of standardisation of the interface between wallets (controllers/agents?) and the witnesses? Is there a desire or a recognised need to document it? Just trying to understand if I'm going in the wrong direction here.
`kli agent start`: What value should be placed in the `--controller` argument? The code states:
parser.add_argument('-c', '--controller',
                    action='store',
                    default="E59KmDbpjK0tRf9Rmc7OlueZVz7LB94DdD3cjQVvPcng",
                    help="Identifier prefix to accept control messages from.")
Does this mean that the prefix should be set up prior to starting the agent, so `kli init` and `kli incept` prior to `kli agent start`?
Yeah, there is definitely a recognized need to document this. I am working on a blog post right now that is my attempt to reconcile the interface between controllers and agents. I am working off of the scripts `issue-xbrl-attestation.sh` and `issue-xbrl-attestation-agent.sh` to guide me, as well as reading the code.
And a follow-up question: how do I start an agent in secure mode and populate the SIGNATURE header correctly when making requests? I see `kli agent vlei` starts all the agents up in insecure mode. I had a little bit of trouble with *500 Internal Server Error*s until I realized that secure mode expects the SIGNATURE header to exist and be valid, and I wasn't sending that header.
rodolfo.miranda
that default value confused me. From the code it seems that if you don't pass `--insecure` you are starting the agent in secure mode, and the `--controller` parameter is the prefix to check signatures against. The HTTP server expects signatures in a `SIGNATURE` header. Validation function is
        sig = req.headers.get("SIGNATURE")
        ked = req.media
        ser = json.dumps(ked).encode("utf-8")
        if not self.validate(sig=sig, ser=ser):
            resp.complete = True
            resp.status = falcon.HTTP_401
            return
rodolfo.miranda
As many of us are learning how the agent part of keri works by looking at the code, I propose to start documenting all our findings in a shared document. Then we can ask the creators to review it and publish as "official" documentation.
rodolfo.miranda
the following .md is something I've been working on looking at the code and wireshark captures:
I will read through it and add what I find out.
rodolfo.miranda
As the community grows, I think we not only need to document the existing "protocols" but also to better handle the creation of new ones.
rodolfo.miranda
Here my initial kick off to research and document:
All interactions between witnesses, agents and a command line like KERIpy should be streamed CESR events. Since we needed to tunnel that over HTTP, we implemented a quick and dirty method of sending one KERI event (or ACDC credential) at a time over HTTP. As you can see from Rodo's wireshark capture, it is a simple post that puts the event in the body and all attachments in an HTTP header. Witnesses and agents will all accept events in this way. All communication is layered on top of this. So we have KERI key events, KERI exn events (which may have other events embedded inside them) and ACDC credentials which are still encoded in CESR and thus sent over HTTP in the same manner.
I would consider this method of "tunneling' over HTTP to be crude and something we want to improve on.
Communication with the Admin interface for the _current_ Agent (from the keripy `kiwiing` module) is a standard REST API with an incomplete signing mechanism for HTTP requests (this is why we never deployed that agent in any cloud environment). That is being replaced by a Signify client and a KERIA agent which has a complete KERI signing mechanism for HTTP requests as well as an entire new API which supports _event_ signing at the edge.
rodolfo.miranda
Thanks <@U024CJMG22J> for the clarifications. Even though it's a crude approach, it's working fine and worth documenting, at least informally. Also, I find it useful (for my learning) to see the events passing around. I'm following this doc from Sam: but I'm not sure if it covers all events and seals.
rodolfo.miranda
Should we work in a more formal documentation?
> Should we work in a more formal documentation? No, not at all. I think this is a great first step in capturing what is currently happening. I was just adding color commentary to what you are capturing and documenting so we have a group understanding of where we are where we would like to go.
This is really helpful. My blog post will show the current Agent from KERIpy. As soon as I understand the Signify client and the KERIA agent I will write another blog post on how to use that.
Perhaps to clarify the need for formal documentation: I guess it's an opportunity to also add to the IETF drafts. Specifically, they are missing some of the codes, e.g. `-H` and `-L` in CESR and Proof Signatures respectively (perhaps it makes sense to reference "extensions" to the tables from the main CESR document). The events section in the KERI IETF draft could benefit from the examples/explanations that are being created as part of this effort. Based on my experience, just reading the spec is not enough. I had to do a lot of reverse engineering to actually understand the meaning and usage of the "internal" messages, such as `qry` and `exn`, as well as what the various signature attachments mean and how they are used. I guess we should view the current *keripy* implementation as the de facto definitive source and update the specs accordingly. Is this a correct assumption? In future, an update cycle along with potential KERI versioning changes could be made to both the reference implementation and the specs, which would allow implementers to keep up and really position KERI as a standard. Just a thought.
Will the introduction of the new KERIA agent also affect how witnesses interact with the external world? ( `Cesr-Attachment` header, `qry` [logs, ksn, mbx] messages)?
<@U04RNMG8Z51> updates to the interaction with witnesses will not be a direct result of the introduction of the Mark II Agent (in KERIA). However, as we are working on the new agent we will probably investigate improvements to witness comms and propose them over time.
As I mentioned in the ACDC call this morning, last week I merged development into main in KERIpy, updated the version to 1.0 and published a new 1.0 tagged docker image.
Today I have opened a PR against development that contains much of the work I've done to support the upcoming Signify client and Mark II Agent (in KERIA). Nothing in there _should_ be breaking yet, but I wanted to leave the PR open for a bit to allow folks here a chance to comment on it if anything looks concerning. I plan to merge the PR into development branch by the end of the day (PST) unless I hear otherwise.
rodolfo.miranda
In the query message the topics are expressed as:
"topics":{
      "/receipt":0,
      "/replay":0,
      "/reply":0
    },
What does the `0` mean after each topic?
I also saw other queries that express a single topic as `"topic":"challenge"` without the `/` and the `0`.
query messages are used to get values from the Mailboxer. It works like this (at a high level): • To get KEL data, one should send a `qry` message with either the `logs` or `ksn` route (in the `r` key). • `logs` is a stream of KEL data for the prefix in the `"q":{"i":"pref"}` part. • `ksn` is the latest key state for the pref (same as above). • These messages will be processed and the response will be added to the `mailbox` under a topic. ◦ For `qry` messages the topic is `receipt`. • To get the response, one needs to follow up with another `qry` message, this time with the route set to `mbx`. ◦ The system assumes that the wallet or agent asking is stateful, but also accommodates the case where some responses were missed, so all the responses for a prefix are stored along with the number of the response. ◦ Internally the responses are stored as `"prefix"/"topic"`; that's why the topic should be `/"topic"`. ◦ The number after a `topic` is the index of the response the agent/wallet is interested in receiving. ▪︎ If there are newer responses under the topic, then the witness will respond with all of them, starting with the requested number. ▪︎ So, if the `qry` has a `topic` value set to `0` then the witness will stream back (via SSE) all known responses for the `topic`. ▪︎ Here's an example of a witness receipt response:
id: 14
event: /receipt
retry: 5000
data: {"v":"KERI10JSON000091_","t":"rct","d":"ELdorFEqtQyGObZRWf54IoKQ5zNdSSA0lvk-VN5lllGs","i":"EAR1eNUllla9Y7l0ru0btqGrFoWuhLk8BkMWFUMEljUY","s":"a"}-CABBEXBtyNmAdUiEMsPYamGdMq4TEQfmitcFAyUYcY15Im20BCMbqyPh0dY2n4AyKw7TX2jsixbqX2nIuNey0QsZmMRTsGMulnjkG76EBYlQdD-q7hgkSSXn_xKhIvUkHB9Ni0H
where `id: 14` is the index of the response and `event: /receipt` is the topic that the `qry` asked for.
The same mechanism is used to provide a "rendezvous" point for communication between agents (for a challenge message, multisig, etc.). This works via `exn` messages with the route set to `/fwd`. These messages wrap other messages inside them (under the `a` key) and put them into the `mailbox` of the corresponding prefix (`"q":{"pre":"pref"}`) under the topic in `"q":{"topic":"topic"}`. Then the wallet of the target prefix can get these using the `qry` mechanism described above. The topic can be anything, and that's how one can add new types of "peer-to-peer" scenarios.
Some topics are standardised. E.g. all `rct` messages will be posted under the `receipt` topic. I haven't gotten to the `replay` or `reply` topics yet. These are related to `Tevery`, which is a class that deals with the Transaction Event Log (TEL). I guess that means credential-related messages.
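A minimal sketch of the mailbox topic-index semantics described above (the `mailbox` dict and `query_topic` helper are illustrative stand-ins, not keripy's actual Mailboxer API):

```python
# Responses are stored per (prefix, topic) with a monotonically
# increasing index; a query asking for index n gets every stored
# response from n onward. Data here is made up for illustration.

mailbox = {
    ("EAR1...", "/receipt"): ["rct-0", "rct-1", "rct-2"],
}

def query_topic(pre, topic, index):
    """Return all (index, response) pairs for (pre, topic) from `index` on."""
    responses = mailbox.get((pre, topic), [])
    return list(enumerate(responses))[index:]

# Asking from index 0 streams everything known for the topic...
assert query_topic("EAR1...", "/receipt", 0) == [(0, "rct-0"), (1, "rct-1"), (2, "rct-2")]
# ...while a stateful agent that already saw 0 and 1 resumes at 2.
assert query_topic("EAR1...", "/receipt", 2) == [(2, "rct-2")]
```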
rodolfo.miranda
Thanks for all those details. That was really insightful!
rodolfo.miranda
For credentials I found the `/credential` topic that sends `iss`, `vcp` and `vrt` events
This is done.
Is there a way to import/export a private-public key into the KERI key store?
rodolfo.miranda
check `keeping.Keeper`class. For example for retrieving a private key from the public one I used keeper.pris.get(hab.kever.prefixer.qb64).raw
rodolfo.miranda
pris def from the code:
pris (subing.CryptSignerSuber): named sub DB whose keys are public key
            from key pair and values are private keys from key pair
            Key is public key (fully qualified qb64)
            Value is private key (fully qualified qb64)
rodolfo.miranda
and `subing` is a class from DB (LMDB sub-dbs)
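A conceptual sketch of the `pris` sub-DB pattern described in the docstring above (a plain dict standing in for the real LMDB-backed `subing.CryptSignerSuber`; the key value here is taken from examples elsewhere in this thread):

```python
# Keys are qb64 public keys from a key pair; values are the matching
# private keys, mirroring keeper.pris.get(...) in spirit.
pris = {
    "DGk74s2P78p296WsszFZXPYnxc6_gGDWonDeVHpB8BEE": b"<private-key-bytes>",
}

def get_private_key(public_qb64):
    # Analogous to keeper.pris.get(hab.kever.prefixer.qb64).raw
    return pris.get(public_qb64)

assert get_private_key("DGk74s2P78p296WsszFZXPYnxc6_gGDWonDeVHpB8BEE") == b"<private-key-bytes>"
assert get_private_key("Dunknown") is None
```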
nuttawut.kongsuwan
Thank you so much!
I'm working on a little experiment I'm planning on sharing at our next meeting on the following branch: If you look at `src/keri/app/cli/commands/ssh/export.py` you'll see a command line utility for exporting public or private keys as OpenSSH PEM files.
Same technique can be used for any file or for importing key material
nuttawut.kongsuwan
That sounds fantastic. Thank you for sharing with us.
<@U024CJMG22J> Thank you :raised_hands:
rodolfo.miranda
kerissh!! Then we need an ssh server side that can resolve OOBIs
Don't spoil my surprise!!
We had a client ask about document signing with AIDs that are holders of vLEIs but that could also authenticate via SFTP without having to maintain 2 accounts. I'm more interested in SSH, so I have a working prototype of a server that runs on a Linux VM that will create a local account and populate `authorized_keys` with the public key of any AID that can present a certain credential (with any business logic you like around the credential presentation). Then the Watcher integrated in the server will update the SSH keys on any rotation it detects for any AID that has access.
So you get enterprise rotatable SSH access via KERI.
rodolfo.miranda
Nice trick!
Same approach could be integrated with something like Active Directory to insert the public keys in there and let Azure manage the access to VMs for you
And now imagine if the key that has the credential is a multisig with a signing threshold of 1 and a rotation threshold of 2 with 3 keys. You could gain SSH access from three separate locations, and if one becomes compromised, rotate it out and still have access.
rodolfo.miranda
And I just realized that I have ssh keys in many servers, for many years, with no key rotation ever!
Right???
Same here.
In my humble opinion, the best part is the name... KERI Authorization for SSH or... KASSH
rodolfo.miranda
The security is the security of the weakest point :grinning:
This is so cool.
rodolfo.miranda
what is bad in ssh is that your private keys are in plaintext in the disk. Are you guys using some integration of ssh with a password manager?
<@U03P53FCYB1> Totally agree. Even though there is the `passphrase` option in `ssh-keygen`, most users do not opt in to protect the private key with a passphrase, out of laziness or because some processes (e.g. automation) do not allow a passphrase to be typed
I want to support the Mark II Agent in KERIA as soon as possible. I just finished writing the code for my blog post against both the KLI and the Mark I agent in KERIpy. How similar would you say the agents are in terms of routes and input parameters or payloads? I will take a look at the code myself today to see what it looks like.
If they are similar enough I will adapt my post to the new Mark II agent. If they are different enough I will follow up later with a supplementary blog post.
They will be significantly different
Thanks
Will a 0.6.9 or 1.0.0 be released to PyPi this week?
We need to remind Sam tomorrow morning at the KERI meeting. The intention is to release 1.0 to PyPi
Will do.
What do the names “curls” and “iurls” mean in the witness configuration file? I see in the `keri.app.habbing.py:BaseHab.reconfigure` function that the *curls* property of the configuration file is what contains the different protocol URLs for both TCP and HTTP. They are passed to `self.makeLocScheme`, then `self.reply`, then `self.endorse` of an `eventing.reply` of the following dict: `dict(eid=eid, scheme=scheme, url=url)` I mostly want to understand what is valid to go into each of the “curls” and “iurls” sections, as well as the thinking behind the names, so I can describe things accurately in my blog post.
It appears that each “curl” is turned into a reply message and then signed (`self.endorse`). I don’t understand reply messages yet and would be happy to read the code to understand them. Just a few pointers to get me going would be great. Reply messages are for key state notice requests, right? Not a part of IPEX, since IPEX is for ACDCs, right? Just trying to put two and two together.
And a related question: since both the `iurls` and `durls` result in the following code, what distinguishes the types of OOBIs that go in each section?
# keri.app.habbing.py:Habery.reconfigure

obr = basing.OobiRecord(date=help.toIso8601(dt))
self.db.oobis.put(keys=(oobi,), val=obr)
rodolfo.miranda
From a previous message from <@U024CJMG22J>: "The OOBIs defined in the configuration under the key `iurls` are OOBIs for resolving key state". And the `curls` are the endpoints to be generated by the agent. My guess: `iurls`= introduction URLs; `curls` = ~configuration~ controller URLs
rodolfo.miranda
and `durls` = Data URLs, that's the one used to retrieve data, such as a schema I think. as "ToDo" :grinning:
The C in curls is for Controller
They are the controller’s urls.
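Putting the three sections together, here is an illustrative config fragment (all URLs and placeholders are made up, not an actual keripy demo file) parsed the way a witness/agent would read it: `curls` are the controller's own endpoints, `iurls` are introduction OOBIs for resolving key state, and `durls` are data OOBIs (e.g. for schemas):

```python
import json

# Hypothetical config: "wan" is an example alias, the OOBI paths are
# placeholders for illustration only.
config = json.loads("""
{
  "wan": {
    "dt": "2022-01-20T12:57:59.823350+00:00",
    "curls": ["tcp://127.0.0.1:5632/", "http://127.0.0.1:5642/"]
  },
  "iurls": ["http://127.0.0.1:5642/oobi/<witness-aid>/controller"],
  "durls": ["http://127.0.0.1:7723/oobi/<schema-said>"]
}
""")

assert set(config) == {"wan", "iurls", "durls"}
# curls carry both TCP and HTTP endpoints for the controller itself.
assert all(u.startswith(("tcp://", "http://")) for u in config["wan"]["curls"])
```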
Should we update the `gleif/keri` docker image tag `latest` to match with the version `1.0.0` ?
Hence anyone pulling the docker image `gleif/keri` (without version tagged) will get the latest version `1.0.0`
From what I've seen, `rpy` messages are used for two purposes: 1. for `ksn` responses 2. for the OOBI KELs to indicate approved end-points for the witnesses, as well as a controller's prefix for the eid (in practice they are the same, but can be different). So one can think of `rpy` messages as a way to communicate state to a controller.
Here's an example of a `ksn` response:
 {'v': 'KERI10JSON0002fa_', 't': 'rpy', 'd': 'EJ6BhTwHQtxtcHREUEHQAl-nFHW2aRU1yCSuehAj-XDe', 'dt': '2023-03-09T17:01:36.116731+00:00', 'r': '/ksn/BEXBtyNmAdUiEMsPYamGdMq4TEQfmitcFAyUYcY15Im2', 'a': {'v': 'KERI10JSON00023f_', 'i': 'EAR1eNUllla9Y7l0ru0btqGrFoWuhLk8BkMWFUMEljUY', 's': '9', 'p': 'EOvy3DL_zbs-fnPxkZ_Hj-0WWboGpchazc7oit1XsmBI', 'd': 'EOA70My4MnN9qvMDvMjfUwSbHIlaD7JXQOLVOOnHjzdp', 'f': '9', 'dt': '2023-03-09T16:59:32.275438+00:00', 'et': 'rot', 'kt': '1', 'k': ['DGk74s2P78p296WsszFZXPYnxc6_gGDWonDeVHpB8BEE'], 'nt': '1', 'n': ['EMLgsS5D0rGd7PI1Mn7wARiiY4tXErwCi0jkJ-IZbPdw'], 'bt': '2', 'b': ['BEXBtyNmAdUiEMsPYamGdMq4TEQfmitcFAyUYcY15Im2', 'BI7jE8sYGKsMoqzdflooeWrhU0Ecp5XJoY4V4cC-zyQy'], 'c': [], 'ee': {'s': '9', 'd': 'EOA70My4MnN9qvMDvMjfUwSbHIlaD7JXQOLVOOnHjzdp', 'br': [], 'ba': []}, 'di': ''}}
Yes, absolutely. I wasn't aware that the GitHub action did not already do this. I'll fix both this morning. Thanks for catching it
Thank you, this is going into my KERI knowledge base :slightly_smiling_face:
You mentioned approved end-points for the witnesses. What is the mechanism for performing an approval? Is the endpoint indicated by the OOBI signed in some way that is verifiable?
petteri.stenius
Is there a way to inspect what "kli agent" and "kli witness" are doing? Sometimes these processes get in a state of consuming significant amounts of cpu and disk I/O. Restarting does not help. I don't know if waiting long enough would help. I have an automated process of setting up a vLEI test environment using the Keri Agent API. In my env the root, ext, qar, lar etc roles are each multi-sig groups of two. I start witnesses with "`kli witness demo`" and agents with "`kli agent vlei`". I have also reproduced this issue by starting witness and agent nodes as separate processes.
Both of these issues have been addressed.
It's this message (or messages, 1 for each endpoint):
{"v":"KERI10JSON0000fe_","t":"rpy","d":"EL8v2q-zLbCqPV4TX2eHLTlhjlcGxb7JUI_33DZo8Zhl","dt":"2023-02-22T11:54:23.468058+00:00","r":"/loc/scheme","a":{"eid":"BDkq35LUU63xnFmfhljYYRY0ymkCg7goyeCxN30tsvmS","scheme":"http","url":""}}-VAi-CABBDkq35LUU63xnFmfhljYYRY0ymkCg7goyeCxN30tsvmS0BA0fNn0QdsoBXG5B2V6_h-dAfVG5cXm6Hg0pZ0mIT-5nHoQKnDTvt6M95hft0ONsMftVZuG9RuYt0jMlvw2Ea0C
In the CESR attachment you can see that it is signed: `-CAB` means the "_nontransferable identifier receipt couples_" group, and `AB` is 1 translated from B64. Then you have the prefix `BDkq35LUU63xnFmfhljYYRY0ymkCg7goyeCxN30tsvmS` (note the leading `B`) and the signature `0BA0fNn0QdsoBXG5B2V6_h-dAfVG5cXm6Hg0pZ0mIT-5nHoQKnDTvt6M95hft0ONsMftVZuG9RuYt0jMlvw2Ea0C` (note the leading `0B`)
In this case the signing prefix is the same as the `eid`, meaning that the service certifies its end point.
In the inception message that usually follows, this prefix will be listed as a backer. Closing the loop, so to speak.
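The Base64URL count decoding mentioned above ("`AB` is 1 translated from B64") can be sketched like this: each character maps to its value in the URL-safe Base64 alphabet and the characters form a big-endian base-64 number (a sketch of the arithmetic only, not a CESR parser):

```python
# URL-safe Base64 alphabet as used by CESR codes.
B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"

def b64_to_int(chars):
    """Read a big-endian base-64 number from Base64URL characters."""
    n = 0
    for c in chars:
        n = n * 64 + B64.index(c)
    return n

assert b64_to_int("AB") == 1   # one receipt couple follows -CAB
assert b64_to_int("AA") == 0
assert b64_to_int("BA") == 64
```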
Additionally, you can query: and you'll get this:
"v":"KERI10JSON0000fe_","t":"rpy","d":"EH-d4Qv5FH5soBWBB8c-L9L8hi5Ev8kmlUtwiM-zHyGt","dt":"2023-03-23T19:05:02.368840+00:00","r":"/loc/scheme","a":{"eid":"BDkq35LUU63xnFmfhljYYRY0ymkCg7goyeCxN30tsvmS","scheme":"http","url":""}}-VAi-CABBDkq35LUU63xnFmfhljYYRY0ymkCg7goyeCxN30tsvmS0BBPFUllEmlBGspBvv76jhb3PV5Yk8bSHkwmFml0QbAM2O4CVsCXc7NJ46D1LT4Tj336JDg-b4bSXDw3G8JEhsEM{"v":"KERI10JSON000116_","t":"rpy","d":"EA7OwJY_tnAU37LmWKUZ3GeyMhJcWqQNSGWggBHElU6Q","dt":"2023-03-23T19:05:02.369838+00:00","r":"/end/role/add","a":{"cid":"BDkq35LUU63xnFmfhljYYRY0ymkCg7goyeCxN30tsvmS","role":"controller","eid":"BDkq35LUU63xnFmfhljYYRY0ymkCg7goyeCxN30tsvmS"}}-VAi-CABBDkq35LUU63xnFmfhljYYRY0ymkCg7goyeCxN30tsvmS0BAjyDOHQ6pvi7qHbPjma6daLTcz6VqsUoPszbKeEdSEKCkXwDFliSuOYGUCISgmrMt-9iPrFSKDzpI206yzw_QO
These messages are what the init code (`self.makeLocScheme`, etc.) is producing.

What's interesting about this "KEL" for a service is that you can see this bit here: `"r":"/end/role/add","a":{"cid":"BDkq35LUU63xnFmfhljYYRY0ymkCg7goyeCxN30tsvmS","role":"controller","eid":"BDkq35LUU63xnFmfhljYYRY0ymkCg7goyeCxN30tsvmS"}` - now you know that the witness prefix is also a controller of this node.
try throwing this in the kli.py file at the top, or changing what you find if something like this exists:
from keri import help
import logging
help.ogler.resetLevel(level=logging.DEBUG)
that's how i debugged through inception and rotation when building them with cesride
the keri import path will be wrong, my test file was in the root of the repo
that’s a useful tidbit. Thanks Jason
I wouldn't recommend that on a witness or agent; the amount of logging it will output will overwhelm you.
There is currently no good way to figure out what is happening inside a running witness which is the bane of my existence. Usually it has something to do with escrow processing that takes off for some reason, but I've never had the time to track it down.
We haven't hit it in our production witnesses yet.
petteri.stenius
I was able to capture the following from a kli agent consuming lots of cpu and I/O. The kli agent will print this same (or similar) message in an endless loop. I haven't yet had a look at the code in the stack trace
hio: Kevery unescrow failed: Failure satisfying toad=3 on witness sigs=['AACppiQO996JpVKmEipZzV2_gvCX4BWyDuzGuFPOeKsycjwpUECypNeVGtD-i4jCQgTFdflFgjw3o4vYZQAgvm8I'] for event={'v': 'KERI10JSON00013a_', 't': 'ixn', 'd': 'EB7AQInx2MvrOMLawgReZLebOnl2oDRj992En03lrkrm', 'i': 'EIwcbaW8YnLnAbfNpzp47qHmbdsF_NpBDfxOltUXhFyb', 's': '1', 'p': 'EIwcbaW8YnLnAbfNpzp47qHmbdsF_NpBDfxOltUXhFyb', 'a': [{'i': 'ECg-_CjiguYECkbcUbfhY08Ob_DMun8Vv8UMiTi489iO', 's': '0', 'd': 'ECg-_CjiguYECkbcUbfhY08Ob_DMun8Vv8UMiTi489iO'}]}.
Traceback (most recent call last):
  File "/home/uroot/keripy/src/keri/core/eventing.py", line 4775, in processEscrowPartialWigs
    self.processEvent(serder=eserder, sigers=sigers, wigers=wigers, seqner=seqner, saider=saider)
  File "/home/uroot/keripy/src/keri/core/eventing.py", line 3025, in processEvent
    kever.update(serder=serder, sigers=sigers, wigers=wigers,
  File "/home/uroot/keripy/src/keri/core/eventing.py", line 2043, in update
    sigers, delegator, wigers = self.valSigsDelWigs(serder=serder,
  File "/home/uroot/keripy/src/keri/core/eventing.py", line 2290, in valSigsDelWigs
    raise MissingWitnessSignatureError(f"Failure satisfying toad={toader.num} "
keri.kering.MissingWitnessSignatureError: Failure satisfying toad=3 on witness sigs=['AACppiQO996JpVKmEipZzV2_gvCX4BWyDuzGuFPOeKsycjwpUECypNeVGtD-i4jCQgTFdflFgjw3o4vYZQAgvm8I'] for event={'v': 'KERI10JSON00013a_', 't': 'ixn', 'd': 'EB7AQInx2MvrOMLawgReZLebOnl2oDRj992En03lrkrm', 'i': 'EIwcbaW8YnLnAbfNpzp47qHmbdsF_NpBDfxOltUXhFyb', 's': '1', 'p': 'EIwcbaW8YnLnAbfNpzp47qHmbdsF_NpBDfxOltUXhFyb', 'a': [{'i': 'ECg-_CjiguYECkbcUbfhY08Ob_DMun8Vv8UMiTi489iO', 's': '0', 'd': 'ECg-_CjiguYECkbcUbfhY08Ob_DMun8Vv8UMiTi489iO'}]}.
hio: Kever state: Escrowed partially witnessed event = {'v': 'KERI10JSON00013a_', 't': 'ixn', 'd': 'EB7AQInx2MvrOMLawgReZLebOnl2oDRj992En03lrkrm', 'i': 'EIwcbaW8YnLnAbfNpzp47qHmbdsF_NpBDfxOltUXhFyb', 's': '1', 'p': 'EIwcbaW8YnLnAbfNpzp47qHmbdsF_NpBDfxOltUXhFyb', 'a': [{'i': 'ECg-_CjiguYECkbcUbfhY08Ob_DMun8Vv8UMiTi489iO', 's': '0', 'd': 'ECg-_CjiguYECkbcUbfhY08Ob_DMun8Vv8UMiTi489iO'}]}
Yeah, that's what I figured. You have an event that it is trying to process and the escrow processing loop is pretty tight. That logging is exactly what I meant when I said it will bury you if you turn it on.
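The threshold check behind that escrow loop can be sketched in one line (an illustrative simplification of what `valSigsDelWigs` enforces, not the keripy code itself): the event names a toad (threshold of accountable duplicity) and stays in partial-witness escrow until at least that many witness receipts arrive.

```python
def toad_satisfied(toad, witness_sigs):
    """True once enough witness signatures have been collected."""
    return len(witness_sigs) >= toad

# The trace above shows toad=3 with only one witness signature
# collected, so the event stays escrowed and is retried.
assert not toad_satisfied(3, ["AACppiQO..."])
assert toad_satisfied(3, ["sig1", "sig2", "sig3"])
```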
Is the registry creation event for a controller an `ixn` event?
And on a related note, is the registry created from `kli vc registry incept` connected to a management TEL? Is there a separate TEL for each issued credential?
No, a registry creation event is in a TEL and has the type `vcp`
The TEL event needs to be anchored in a KEL and that anchor can be with an `ixn` event or a `rot` event.
There is one TEL for the registry and one for each credential. The credential TELs have 1 or 2 events only.
`iss` when it is created, and if it is ever revoked, `rev` is added.
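The TEL layout just described can be modeled with plain lists (illustrative data structures, not keripy classes): one management TEL per registry, plus a tiny TEL per credential, with every TEL event anchored into the KEL via an `ixn` or `rot` event.

```python
# Management TEL: vcp at creation, vrt on any backer rotation.
registry_tel = ["vcp"]

# Credential TEL: iss at issuance, rev only if revoked.
credential_tel = ["iss"]

def revoke(tel):
    """Append rev to an issued, not-yet-revoked credential TEL."""
    if tel == ["iss"]:
        tel.append("rev")
    return tel

assert revoke(credential_tel) == ["iss", "rev"]
assert len(credential_tel) <= 2   # credential TELs have 1 or 2 events only
```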
Thanks
petteri.stenius
Is there a way to resolve this? This feels like a race condition that occurs during inception of multi-sig groups. It never happens on a slower i5 computer I have. It frequently happens on a faster i7.
Phil, I assume the blinded state TEL isn't implemented yet - that seems like it would need more than 2 entries for a cred, right?
Or do I misunderstand, I'm going from memory
Correct, it has not been implemented as we have no need for it in GLEIF
So must every TEL event, whether a Management TEL or VC TEL, be anchored to a KEL or are only Management TEL events anchored to the KEL?
That’s interesting. I wonder if it would happen on my M1 Mac. <@U03U37DM125> send me the reproduction steps and I will try this on my machine.
It does not happen on my M1 Mac
All TEL events are anchored in a KEL
Thank you.
Why is TOAD accepted in registry creation by the Mark I agent yet not in the `kli vc registry incept` command? Is this just an oversight (and would you appreciate a GitHub issue on it) or is it intentional?
If you are creating a registry with its own backers then you need a threshold for that backer set. If this was omitted from the `kli` it does need to be added. GLEIF is not using TEL specific backers so we never needed the threshold.
Thanks. I was comparing the KLI to the Agent API while writing my blog post and encountered this difference and just wanted to know why.
So then, the Agent API supported creating a registry with its own backers and thus needed a TOAD to be passed in.
I see that a default is selected if `toad` is None:
toad = ample(len(baks))
I wouldn’t rely on anything really supporting separate backers for a TEL. It’s a feature no one has ever used or tested.
Got it. I am more satisfying curiosity right now than building something.
I will make a note of your comment in my blog post. It’s really cool to have comments from an expert in the trenches to add to my post. Thanks a lot.
Deleting (revoking) credentials with The Mark I Agent (deprecated) doesn’t seem to have the equivalent of the `--send` option to `kli vc revoke`. Does this sound right? I would like to help make sure that functionality is in the Mark II Agent.
Correct because there are no issuers using the Mark I Agent.
rodolfo.miranda
What exactly is the Mark I Agent that is being deprecated?
The agent that currently lives in `keripy`: all the code in `kiwiing.py`
rodolfo.miranda
Thanks. The openAPI. So, is the idea to leave only the `kli` in keripy?
Yes, that is exactly right
Question on `kli`: why does a rotate operation for a trans prefix with only one backer return "ERR: Missing prikey in db for pubkey=D ..." right after the inception?
Need more details. Please open an issue
I've decided to debug this a bit, before I open an issue.
It looks like there is a race condition: after a successful rotate (`RotateDoer` calls `BaseHab.rotate`), `indirecting.EventDo` invokes `msg = self.hab.query(pre=self.pre, src=self.witness, route="mbx", query=q)`, which then causes `BaseHab.endorse` to do `sigers = self.sign(ser=serder.raw, indexed=True)`. `BaseHab.sign` by default has `verfers=None`, which causes it to use `self.kever.verfers`. The problem is that at this point in time (before the `rot` message has even been sent) `kever.verfers` still has the pre-rotation verkey (the one used for inception), and `Manager.sign` can't find it anymore; this check fails:
if ((signer := self.ks.pris.get(verfer.qb64,
                                                decrypter=self.decrypter))
                        is None):
                    raise ValueError("Missing prikey in db for pubkey={}".format(verfer.qb64))
`Manager.rotate` has already erased the inception's verkey:
if erase:
            for pub in old.pubs:  # remove prior old prikeys not current old
                self.ks.pris.rem(pub)
`erase` is by default `True`
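The race described above can be reduced to a toy sketch (plain dicts, not keripy's `Manager`/`Keeper`): rotate with `erase=True` removes the old private key, but a signer still holding the pre-rotation verfer then fails its lookup, just like the "Missing prikey in db" error.

```python
# Toy key store: qb64 public key -> private key (illustrative values).
pris = {"Dold": b"old-prikey", "Dnew": b"new-prikey"}
current_verfer = "Dold"        # kever not yet updated to the new key

def rotate(erase=True):
    if erase:
        del pris["Dold"]       # Manager.rotate erases prior prikeys

rotate()

# A concurrent endorse/sign still using the stale verfer now fails:
assert pris.get(current_verfer) is None
assert pris.get("Dnew") == b"new-prikey"
```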
Debugging asynchronous code is fun :)
I feel that in my soul!
Hi guys, is there anyone who has ever used a remote witness pool? After creating an agent with the remote pool configuration, its OOBI URLs should start with the remote pool base URL, I think. But I always get `` like this, and this OOBI link isn't resolved. (Actually, I used `ngrok` tunneling and launched the pool and agent as docker instances respectively.)
Yes, the vLEI ecosystem has several witness pools running in production use.
See more information about GLEIF Root AID OOBI and witness infrastructure here
Question/observation re. connection handling for `mbx` `qry` messages sent to the witness. The server responds with a header, `Transfer-Encoding: chunked`, and then sends one or more SSEs in a chunk. But it doesn't add the last chunk (`0CRLF`), which causes an off-the-shelf HTTP/1.1 client to time out. Considering another header, `Connection: close`, the desired behavior is that the client closes the connection after the chunk with the SSEs is received. Correct? Just making sure the intent is not to have the client stay online and listen for more incoming SSEs. Shouldn't the server close the connection itself, or at least send the "last chunk" sequence, though?
rodolfo.miranda
I sniffed the same behavior. It seems that the agent reconnects any time the query is requested, so in that case the correct approach would be for the client to close the connection to optimize resources on both ends.
The mailbox server/client in python is the result of trial and error getting SSE events to work from the hio library HTTP code on the client and server, along with a JavaScript client, all while hosting witnesses behind an nginx reverse proxy running in a cloud hosted Kubernetes instance. In no way do I claim that it is technically accurate according to SSE or HTTP specs; it's just what we got working under an extremely tight deadline. If anyone knows how to improve it or make it more stable / technically accurate I would LOVE a PR.
rodolfo.miranda
I wonder if the agent actually needs SSE for the mbx, or if it can just use a standard query/response API.
It certainly does not, and as a matter of fact, the current solution looks more like long polling than actual SSE.
rodolfo.miranda
I think that moving out of SSE will simplify deployments in the cloud.
SSE was the bane of my existence during our run to production but Sam was pretty passionate about its usefulness in KERI due to the asynchronous nature of events.
So we "made it work"
petteri.stenius
I was able to reproduce this issue with a bash script. On a faster i7 computer this script will always fail, with the agent ending up in the escrow loop seen above. On a slower i5 computer this does not fail. The oobi request at seems to trigger this condition. Adding a small delay before the oobi helps avoid the issue.
nice work at creating reproducible results
Yeah, nice work!
Ok, so this is basically the multisig-agent.sh script with the sleeps removed. Since the creation of the events is asynchronous when witnesses are involved, it is not correct to assume that you can or should be able to immediately query for an OOBI for an event that you just requested through the API. Your `wait_receipts` has a bug in that it can return success when the controller has received all the event receipts but still has not propagated them to the witnesses. If you really want to wait, you'd have to query all witnesses (or at least the one you want to OOBI against) to make sure they have all the receipts.

The other bug here is that the witness should not be resolving an OOBI if it does not yet have a threshold-satisfying complement of receipts. It seems to be, under certain circumstances, returning the inception event with only its own receipt. I've addressed this issue before _somewhere_ but obviously not in OOBI resolution. If you could create an issue against KERIpy, I'd appreciate it.

As for the excessive CPU, that has been on the TODO list forever. When we went to production at GLEIF, we metered the escrow processing loop to mitigate the effect of escrowed events on overall performance. I guess that change never made it back to keripy. In addition, all events should time out of escrow, so the effects should only be transient. We need to check the timeouts to make sure they are being enforced across all escrows and are of appropriate duration. I will look at these last two issues this afternoon.
I found the fix for the OOBI resolution problem: `Baser.fullyWitnessed`, which is being used in `processQuery` and `findAnchoringEvent` but not in OOBI resolution. So in `ending.py`, in the `OOBIEnd.on_get` method, we just need to change the following:
        if aid not in self.hby.kevers:
            rep.status = falcon.HTTP_NOT_FOUND
            return
 to

        if aid not in self.hby.kevers:
            rep.status = falcon.HTTP_NOT_FOUND
            return

        kever = self.hby.kevers[aid]
        if not self.hby.db.fullyWitnessed(kever.serder):
            rep.status = falcon.HTTP_NOT_FOUND
            return
With that change, you will get a 404 trying to resolve an OOBI for an AID that is not fully witnessed yet. Then in your script, if you don't want to sleep, you can loop on the OOBI resolution request until it returns a 200. I'll put together the PR this afternoon, no need to create an issue.
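The suggested client-side loop might look like this (a sketch: `fetch_oobi` is a hypothetical stand-in for a real HTTP GET against the witness's OOBI endpoint):

```python
import time

def resolve_oobi(fetch_oobi, retries=10, delay=0.0):
    """Poll the OOBI endpoint until it returns 200 (fully witnessed)."""
    for attempt in range(retries):
        if fetch_oobi() == 200:
            return True
        time.sleep(delay)      # back off between attempts
    return False

# Simulated witness: 404 until receipts propagate, then 200.
responses = iter([404, 404, 200])
assert resolve_oobi(lambda: next(responses)) is True
```

In a real script this replaces the fixed `sleep`, since the loop exits as soon as the AID is fully witnessed.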
My fix for this (along with an annoying fix for codecov because they deleted their PyPi package) has been merged to the development branch. <@U03U37DM125> could you rerun your test and confirm that the agent no longer spins out of control?
petteri.stenius
Thanks <@U024CJMG22J> for looking into this! Inception stability is something that has bothered me for a while. Yes, I want to get rid of "sleep" and instead have some more reliable way to check when the async operation has completed. The notification pattern of multi-sig and credential operations looks solid. Could there be a notification from single-sig operations too? In my script I'm now waiting for status 200 from the oobi endpoint of the witness
wait_receipts "" $toad
wait_status_ok ""
This does help thanks!

On the dev branch there's some other issue with group inception: `curl -s -X POST ""` always returns 404 Not Found.
I get a "not found" message for
Can we get another KERIpy release any time soon, v1.0.1? Sam’s change in of `coring.Ids.dollar` to `coring.Saids.dollar` broke KASLcred. His change was on April 6th, six days after the 1.0.0 Pypi release on 3/31/23
This should be an issue not a slack message
Done:
Ok, thanks. I'll take a look in the morning to understand the problem.
OK. I've played with the code a bit. I was able to improve the current implementation slightly. Each SSE event is now streamed in a separate chunk (previously a topic with a 0 index would cause a bunch of them to be sent as a single chunk). Additionally, the timeout on a connection works more reliably. However, with or without these improvements, the current implementation will still not work with standard JS: `EventSource` uses a `get` on a URI.
I don't know if the community/GLEIF really cares about this, though :slightly_smiling_face: In any case, additional changes can be beneficial not only for hypothetical JS, browser-based clients. E.g. we could separate the query submission and the SSE consumption; I think it's still a good way to send updates as they come in on a topic (unless we want to add gRPC). Here's how that could look:
1. Send the usual `qry` message to a URI (for compatibility, a separate one, e.g. `/qry/`, `post`)
   a. Do the usual processing: parse out the `said` and let `Kevery` handle the query message, post to cues, etc.
2. Connect to an SSE URI, e.g. `/qry/{said}` (with a `get`)
   a. Server finds the matching `cue` or errors if not found
   b. Server stores the `cue` in an in-memory cache with timeout-based eviction (a `Doer`-based mechanism?)
   c. Stream the topics (as per the dict in the cue) as they come in (the current logic)
   d. Time out on the server side, based on a setting (say 30 secs), and close the connection
3. Client will reconnect to the same URI with a delay as per the `retry` field
   a. Server will return an error (404) if the query has timed out in the cache and is no longer findable
If this sounds reasonable, I can prototype further.
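The in-memory cache with timeout-based eviction proposed above (the cue storage step) could be prototyped roughly like this; all names here (`CueCache`, `put`, `get`) are hypothetical, not keripy APIs:

```python
import time

class CueCache:
    """Pending query cues keyed by the qry message's SAID, with TTL eviction."""

    def __init__(self, ttl=30.0):
        self.ttl = ttl
        self.cues = {}                  # said -> (cue, deadline)

    def put(self, said, cue):
        self.cues[said] = (cue, time.monotonic() + self.ttl)

    def get(self, said):
        entry = self.cues.get(said)
        if entry is None:
            return None                 # unknown or already evicted -> 404
        cue, deadline = entry
        if time.monotonic() > deadline:
            del self.cues[said]         # evict on timeout
            return None
        return cue

cache = CueCache(ttl=0.05)
cache.put("EJ6B...", {"topics": {"/receipt": 0}})
assert cache.get("EJ6B...") is not None
time.sleep(0.06)
assert cache.get("EJ6B...") is None     # timed out, reconnecting client gets 404
```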
The main benefit will be the ability to submit the same or an updated query once in a while and then listen on topics in a separate thread/process, which will make "watcher" types of clients easier to build, IMHO.
<@U03P53FCYB1> and <@U024CJMG22J> - what do you think?
rodolfo.miranda
I like that idea, and that would help a lot in JS. However I'm still thinking if a regular periodic query-response (with pagination) would be a simpler approach. Where is the asynchronicity produced? at cue processing? is that problematic to handle it synchronous in the query/response call?
<@U03P53FCYB1> - The asynchronicity in the code itself is not a big issue. The current SSE implementation deals with it already by waiting for a "response" `cue` coming from `Kevery` and then iterating over the `mbx`, which has a `TopicIterator` (each topic - `receipt`, `replay`, `reply`, etc. - is stored under a `{pre}/{topic}` key along with the sequence number, or index, of the response/message).

The more interesting aspect of the code is that the current approach allows one to simply listen for new query response messages for all the topics specified in the original query as they come in, provided the client stays online. So for certain use-cases, such as a "watcher", it's beneficial.

It's not difficult to add an additional endpoint where a simple request/response mechanism will give you a collection of messages under a topic, either as a single JSON payload (an array of responses, or response index keys, etc.) or with pagination. The latter, though, is not as straightforward to implement. Currently, the `qry` message includes a topic along with its index, which is used to provide either the last response, if the index matches, or all the responses starting from the specified index, if the index is not the last one. A paginated request will likely specify the desired index and number of responses in the URL itself, so careful design is needed here to avoid ambiguity. Perhaps a paginated request should only specify a number of events per response and, possibly, an index offset indicating the starting point from the topic's index specified in the `qry`.
A more radical approach would be to create a completely new mechanism, bypass the `qry` parsing altogether and send responses from the DB itself: basically do what `Kevery.processQuery` does. I'm not sure what the original motivation for the `qry` messages was in the first place, though, so it's difficult for me to say whether or not it might break some design assumption at the "systems" level. After digging around the code and playing with various query approaches, my impression is that the signed `qry` message enforces a certain amount of trust establishment between witnesses and external controllers/agents (the prefix and its backers have to be known to the witness). There's also a special use-case for `exn` forward messages, where a response to a `qry` coming from (signed by) a prefix belonging to another known witness/backer will be sent to one of the backers of that prefix. I guess this is used for multi-sig, etc.
In any case, I'd be super interested to see what approach GLEIF would like to take to simplify the development of applications and use-cases using or extending their KERI infrastructure.
rodolfo.miranda
Thanks <@U04RNMG8Z51> for the explanation. You are correct, for watchers or any persistent agent the SSE is a nice approach. I was thinking of the mobile user that wakes up from time to time. In that case a push notification procedure plus the async query API would be interesting to implement. Probably those concepts are already in the design of KERIA + Signify. <@U024CJMG22J> that would be nice to discuss in your IIW KERIA session if you agree.
What again is the definition and distinction between the following terms in KERIpy? • curls (controller URLs?) • durls (data URLs?) • iurls (?)
I just had to scroll up. It looks like the following is accurate (correct me if not): • curls = controller URLs • iurls = introduction URLs (The OOBIs defined in the configuration under the key `iurls` are OOBIs for resolving key state) • durls = data URLs
The developers at GLEIF appreciate that some of the paradigms used in KERIpy, particularly the use of co-routines, can be difficult for the average developer to understand. We are actively working to remedy this situation by providing Signify libraries that remove most of the complexities of keripy by exposing a more typical API.
<@U03P53FCYB1> I created the following repo: with the hope that you could move the Cardano-specific code out of keripy, and then we can get 418 merged with the changes you need to make it work.
Oh crap, I just noticed you created your own repo at RootsID for it. Let me know if you wanna leave it there or put it in WebOfTrust.
rodolfo.miranda
I realized that now there is a `/receipts` API to query for receipts. Is it intended only for KERI agents, or is it going to be used by all keripy agents?
It is now being used by all AIDs creating inception events.
It short circuits the chicken and egg scenario for end point definitions.
rodolfo.miranda
ok. It's a breaking change, meaning that new agents cannot interact with old backers. Should a 1.1 version be published?
That should not be the case. Witnesses support the old version too
rodolfo.miranda
I mean a new AID (from keria for example) would not be able to use an old witness, right?
rodolfo.miranda
We moved the cardano backer to the WebOfTrust repo
Is `memoryview` used anywhere in keripy for variables? I saw it used in a bunch of function signatures and in some tests, though not in any variable declarations. Does the Python library implementation of LMDB use memoryview, and is that why it's all over the place in function signatures in `dbing.LMDBer`?
The linked SO post has a really good rationale for why to use memoryviews for speed. But the `suffix` and `unsuffix` functions convert everything to `bytes` anyway, so why use `memoryview`s at all?
I think that’s all LMDB stuff
Sam’s documentation in dbing.py is really good.
I’d have no idea what was going on without it. I’d have to read the LMDB docs.
Why are there two state functions in keri.vdr.eventing? Is VCState used for transaction events [iss, rev] and state used for registry events [vcp, vrt]?
Yep, that’s it, just read the code. Except that neither state function is used for the “vrt” event.
nuttawut.kongsuwan
I have been trying to use kli commands to do multisig delegation where there are two delegators and two delegates, based on the demo files and . During the delegates' rotation, my command line asks for confirmation from the delegators, i.e. `Waiting for delegation approval...`. Then I tried the following commands for the delegators to confirm the rotation event from the delegates: `kli delegate confirm --name delegator1 --alias delegator --interact --auto` `kli delegate confirm --name delegator2 --alias delegator --interact --auto` The above do not seem to work. May I ask if anyone knows how to do it?
Search for ‘tel.md’ in the source files, you may find some more info there
What is the actual behaviour? Just reading on my phone I wonder if it’s a bad thing to use the same alias for two delegators
nuttawut.kongsuwan
I use 4 terminals for the 4 parties (plus one terminal for running `kli witness demo`). Two terminals are used for rotation by the delegates and the other two for confirmation by the delegators. The terminals freeze on `Waiting for delegation approval...`.
I see in the script there is a sleep 3... did you try increasing this value?
Perhaps on your setup it needs more time
Okay, I'm trying this out for the learning. I ran `kli witness demo` and then `multisig-delegate-delegator.sh`, but the script can't seem to find witness endpoints. Is `witness demo` sufficient to spin up listening witnesses? The AID matches (it's salted with a fixed salt, I believe)
I fixed it; it was simply a misconfiguration and silent failure... I was defining KERI_SCRIPT_DIR wrong
Does the script work fine for you? And it's just the manual terminal work that is failing?
nuttawut.kongsuwan
Thanks a lot for helping out. The script works for me as well. However, the script ends at the delegated inception. I tried to extend the script with delegated rotation, and that is where I am stuck!
nuttawut.kongsuwan
This is my script that I put at the end of `multisig-delegate-delegator.sh`.
kli rotate --name delegate1 --alias delegate1
kli query --name delegate2 --alias delegate2 --prefix EJ97lUuRH3xz0OMKhdMAU6V2TcSF9X6m1CKyIbIUcRxp
kli rotate --name delegate2 --alias delegate2
kli query --name delegate1 --alias delegate1 --prefix ELZyCjnSL2Haors35LKM19T4qWT4K8Gfz1FPDD9oJN33
kli multisig rotate --name delegate1 --alias delegate --smids EJ97lUuRH3xz0OMKhdMAU6V2TcSF9X6m1CKyIbIUcRxp:1 --smids ELZyCjnSL2Haors35LKM19T4qWT4K8Gfz1FPDD9oJN33:1 --isith '2' --nsith '2' --rmids EJ97lUuRH3xz0OMKhdMAU6V2TcSF9X6m1CKyIbIUcRxp --rmids ELZyCjnSL2Haors35LKM19T4qWT4K8Gfz1FPDD9oJN33 &
kli multisig rotate --name delegate2 --alias delegate --smids EJ97lUuRH3xz0OMKhdMAU6V2TcSF9X6m1CKyIbIUcRxp:1 --smids ELZyCjnSL2Haors35LKM19T4qWT4K8Gfz1FPDD9oJN33:1 --isith '2' --nsith '2' --rmids EJ97lUuRH3xz0OMKhdMAU6V2TcSF9X6m1CKyIbIUcRxp --rmids ELZyCjnSL2Haors35LKM19T4qWT4K8Gfz1FPDD9oJN33 &
kli delegate confirm --name delegator1 --alias delegator --interact --auto &
kli delegate confirm --name delegator2 --alias delegator --interact --auto &
Just noticed that for the `kli init` with same `salt` will result in a different `Identifier` if I change the `witness(es)` for the `kli incept`. Is this intended?
nuttawut.kongsuwan
Yes. The AID is the SAID of the inception event, which contains the witnesses.
Thanks, that’s helpful
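The point above can be sketched in a few lines. This is only an illustration, not keripy's actual SAID algorithm (keripy derives the AID with a Blake3-256 digest and CESR encoding; `blake2b` and the truncated field values here are stand-ins):

```python
import hashlib
import json

def toy_said(event: dict) -> str:
    """Toy self-addressing digest: hash the serialized event. keripy's real
    derivation (coring.Prefixer / Saider) differs, but the dependency on every
    field of the inception event is the same."""
    raw = json.dumps(event, sort_keys=True).encode()
    return hashlib.blake2b(raw, digest_size=32).hexdigest()

# Same salt-derived keys "k", different witness list "b" -> different AID.
icp_a = {"t": "icp", "k": ["DFnh..."], "b": ["BBil..."]}
icp_b = {"t": "icp", "k": ["DFnh..."], "b": ["BLsk..."]}

assert toy_said(icp_a) != toy_said(icp_b)
```

So two `kli incept` runs with an identical `kli init` salt still yield different identifiers whenever the witness configuration differs, because the witnesses are inside the event being digested.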
Ohhh I see now, let me try what you tried
echo "sleeping for 9 seconds"
sleep 9
echo "done sleeping"

kli rotate --name delegate1 --alias delegate1
kli query --name delegate1 --alias delegate1 --prefix EJ97lUuRH3xz0OMKhdMAU6V2TcSF9X6m1CKyIbIUcRxp
kli rotate --name delegate2 --alias delegate2
kli query --name delegate2 --alias delegate2 --prefix ELZyCjnSL2Haors35LKM19T4qWT4K8Gfz1FPDD9oJN33

kli multisig rotate --name delegate1 --alias delegate &
pid=$!
PID_LIST+=" $pid"
kli multisig rotate --name delegate2 --alias delegate &
pid=$!
PID_LIST+=" $pid"

echo "sleeping for 3 seconds"
sleep 3
echo "done sleeping"

kli delegate confirm --name delegator1 --alias delegator --interact --auto &
pid=$!
PID_LIST+=" $pid"
kli delegate confirm --name delegator2 --alias delegator --interact --auto &
pid=$!
PID_LIST+=" $pid"

wait $PID_LIST

kli status --name delegate2 --alias delegate

echo "Script complete"
this is where I'm at (the end of the script). You can see it failing at the rotation step now; I don't think it should be necessary to specify much.

I debugged a bit and came up with this output:
❯ kli multisig rotate --name delegate1 --alias delegate                                
merfers=['DM1Jx2rAFxQVAH6YnOnbFYFZQhvH_SGAEU-IcP5jtuIS', 'DHpw3FYAVqy8BfPgtRvUGWW3-xQgOCPCr9iJbV_tdR9c']
migers=['ENoxYsw70vvIRev8OfIIwR2lquUDQ5aXhlJQwHLEC6Up', 'EMaaE21SEeOdiClrM3EVLlO21C2YlYXUI-l7bzG8pEKa']
pdigs: ['ENoyJBvokuT6C-7JhH-QOMha6h8BTLM65LUqzqZ9cACp', 'ENEN1eZ3ghV193kXF3l5VddHETEWt-gwpbmjOLx3mO2v']
diger: ENoyJBvokuT6C-7JhH-QOMha6h8BTLM65LUqzqZ9cACp
pdigs: ['ENoyJBvokuT6C-7JhH-QOMha6h8BTLM65LUqzqZ9cACp', 'ENEN1eZ3ghV193kXF3l5VddHETEWt-gwpbmjOLx3mO2v']
diger: EMaaE21SEeOdiClrM3EVLlO21C2YlYXUI-l7bzG8pEKa
ERR: invalid rotation, new key set unable to satisfy prior next signing threshold
which came from this debug code:

❯ git diff --minimal ../src/keri/app/habbing.py
diff --git a/src/keri/app/habbing.py b/src/keri/app/habbing.py
index b6d5694e..75d6b664 100644
--- a/src/keri/app/habbing.py
+++ b/src/keri/app/habbing.py
@@ -1169,6 +1169,8 @@ class BaseHab:
         indices = []
         for idx, diger in enumerate(kever.digers):
             pdigs = [coring.Diger(ser=verfer.qb64b, code=diger.code).qb64 for verfer in verfers]
+            print(f"pdigs: {pdigs}")
+            print(f"diger: {diger.qb64}")
             if diger.qb64 in pdigs:
                 indices.append(idx)
 
  ~/github.com/WebOfTrust/keripy/scripts on   development !5 ?2                                           keripy  3.0.0 at  07:39:41
❯ git diff --minimal ../src/keri/app/cli/commands/multisig/rotate.py
diff --git a/src/keri/app/cli/commands/multisig/rotate.py b/src/keri/app/cli/commands/multisig/rotate.py
index eeb536be..da185436 100644
--- a/src/keri/app/cli/commands/multisig/rotate.py
+++ b/src/keri/app/cli/commands/multisig/rotate.py
@@ -198,6 +198,8 @@ class GroupMultisigRotate(doing.DoDoer):
 
         prefixer = coring.Prefixer(qb64=ghab.pre)
         seqner = coring.Seqner(sn=+1)
+        print(f"merfers={[merfer.qb64 for merfer in merfers]}")
+        print(f"migers={[miger.qb64 for miger in migers]}")
         rot = ghab.rotate(isith=self.isith, nsith=self.nsith,
                           toad=self.toad, cuts=list(self.cuts), adds=list(self.adds), data=self.data,
                           verfers=merfers, digers=migers)
It looks like there are two sets of digers considered here, and one isn't being respected. Do they need to be merged? I don't know how multisig works. Threshold is 2, and there are two verfers there that satisfy one of each from the two sets of digers.
Also you can say something like `export DEBUG_KLI=1` to get stack traces when you run kli and it fails
oh actually
I'm reading this wrong aren't I, the debug `diger` is the correct one from the KEL, as indicated here:
{
 "v": "KERI10JSON000249_",
 "t": "dip",
 "d": "EOL2umo-DHgO9t22LR_iwmiR_cfsF531hcCh-zZ0p0gL",
 "i": "EOL2umo-DHgO9t22LR_iwmiR_cfsF531hcCh-zZ0p0gL",
 "s": "0",
 "kt": "2",
 "k": [
  "DFnhkUJRim8dlcXVHHIki3ObI37qwaZgKFyiHY2VOJsJ",
  "DHpw3FYAVqy8BfPgtRvUGWW3-xQgOCPCr9iJbV_tdR9c"
 ],
 "nt": "2",
 "n": [
  "ENoyJBvokuT6C-7JhH-QOMha6h8BTLM65LUqzqZ9cACp",
  "EMaaE21SEeOdiClrM3EVLlO21C2YlYXUI-l7bzG8pEKa"
 ],
 "bt": "3",
 "b": [
  "BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha",
  "BLskRTInXnMxWaGqcpSyMgo0nYbalW99cGZESrz3zapM",
  "BIKKuvBwpmDVA4Ds-EpL5bt9OqPzWPja2LigFYZN2YfX"
 ],
 "c": [],
 "a": [],
 "di": "EK7j7BobKFpH9yki4kwyIUuT-yQANSntS8u1hlhFYFcg"
}
two loops, two digers output, match this key event.
wait a tick
{
 "v": "KERI10JSON000159_",
 "t": "icp",
 "d": "EJ97lUuRH3xz0OMKhdMAU6V2TcSF9X6m1CKyIbIUcRxp",
 "i": "EJ97lUuRH3xz0OMKhdMAU6V2TcSF9X6m1CKyIbIUcRxp",
 "s": "0",
 "kt": "1",
 "k": [
  "DFnhkUJRim8dlcXVHHIki3ObI37qwaZgKFyiHY2VOJsJ"
 ],
 "nt": "1",
 "n": [
  "ENoyJBvokuT6C-7JhH-QOMha6h8BTLM65LUqzqZ9cACp"
 ],
 "bt": "1",
 "b": [
  "BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha"
 ],
 "c": [],
 "a": []
}
ahhh I think I see what we need to do
since we rotated this key we need to specify an older key
actually
seems like delegate1 just doesn't know the keystate of delegate2
and vice versa
within the context of the delegate1 keystore, i notice the correct key being used for that delegate but the old key being used for the other delegate
so we're probably missing a step
or maybe i still misunderstand
i think this is probably the explicit command to rotate:
kli multisig rotate --name delegate1 --alias delegate --smids EJ97lUuRH3xz0OMKhdMAU6V2TcSF9X6m1CKyIbIUcRxp:1 --rmids EJ97lUuRH3xz0OMKhdMAU6V2TcSF9X6m1CKyIbIUcRxp:0 --smids ELZyCjnSL2Haors35LKM19T4qWT4K8Gfz1FPDD9oJN33:1 --rmids ELZyCjnSL2Haors35LKM19T4qWT4K8Gfz1FPDD9oJN33:0
but I am met with this:
ERR: non-existant event 1 for signing member ELZyCjnSL2Haors35LKM19T4qWT4K8Gfz1FPDD9oJN33
so I think we just need to get the event into the right place
because
I can find this if I query the other aid:
{
 "v": "KERI10JSON000160_",
 "t": "rot",
 "d": "EEYwYx51kWPPElKP8cOpedXVrtwKRb50p84aPiMPBzgZ",
 "i": "ELZyCjnSL2Haors35LKM19T4qWT4K8Gfz1FPDD9oJN33",
 "s": "1",
 "p": "ELZyCjnSL2Haors35LKM19T4qWT4K8Gfz1FPDD9oJN33",
 "kt": "1",
 "k": [
  "DE9mOlFzNf4x0AnpMwMPsK2BSE-uPgkUH_lk7ApPqix4"
 ],
 "nt": "1",
 "n": [
  "ELfAmhw_c9B8Up1_FytoQMRFB5CyQzKZfycWqgUfaAID"
 ],
 "bt": "1",
 "br": [],
 "ba": [],
 "a": []
}
so sn 1 exists for ELZyC....
so i fixed part of this
with this code:
kli query --name delegate1 --alias delegate --prefix ELZyCjnSL2Haors35LKM19T4qWT4K8Gfz1FPDD9oJN33
kli query --name delegate2 --alias delegate --prefix EJ97lUuRH3xz0OMKhdMAU6V2TcSF9X6m1CKyIbIUcRxp
if you inject that before the multisig rotates but after the rotates/queries, it will fetch the correct key events to understand how to build the event
then you can simply do
kli multisig rotate --name delegate1 --alias delegate &
pid=$!
PID_LIST+=" $pid"
kli multisig rotate --name delegate2 --alias delegate &
pid=$!
PID_LIST+=" $pid"

wait $PID_LIST
PID_LIST=""
however everything seems to wait at the end
it can't find any escrows
when confirming
i am debugging why this is
So turns out that on `main` (<- important) using this code:
wait $PID_LIST
PID_LIST=""

kli multisig rotate --name delegate1 --alias delegate &
pid=$!
PID_LIST+=" $pid"
kli multisig rotate --name delegate2 --alias delegate &
pid=$!
PID_LIST+=" $pid"

sleep 5

kli delegate confirm --name delegator1 --alias delegator --interact --auto &
pid=$!
PID_LIST+=" $pid"
kli delegate confirm --name delegator2 --alias delegator --interact --auto &
pid=$!
PID_LIST+=" $pid"

wait $PID_LIST
things seem to work:

❯ kli status --name delegate2 --alias delegate
Alias: 	delegate
Identifier: EOL2umo-DHgO9t22LR_iwmiR_cfsF531hcCh-zZ0p0gL
Seq No:	1
Delegated Identifier
    Delegator:  EK7j7BobKFpH9yki4kwyIUuT-yQANSntS8u1hlhFYFcg ✔ Anchored

Group Identifier
    Local Indentifier:  ELZyCjnSL2Haors35LKM19T4qWT4K8Gfz1FPDD9oJN33 ✔ Fully Signed

Witnesses:
Count:		3
Receipts:	0
Threshold:	3

Public Keys:	
	1. DM1Jx2rAFxQVAH6YnOnbFYFZQhvH_SGAEU-IcP5jtuIS
	2. DE9mOlFzNf4x0AnpMwMPsK2BSE-uPgkUH_lk7ApPqix4
Oh wait, should there be receipts?
I think I made it farther, and perhaps to the end, but I have to do some other work for a while. Let me know if this was helpful Nuttawut or if there is anything else I can do when I'm available again
I'm a bit confused because this was printed by the script:
Waiting for witness receipts for EOL2umo-DHgO9t22LR_iwmiR_cfsF531hcCh-zZ0p0gL
Witness receipts complete, EK7j7BobKFpH9yki4kwyIUuT-yQANSntS8u1hlhFYFcg confirmed.
Delegate EOL2umo-DHgO9t22LR_iwmiR_cfsF531hcCh-zZ0p0gL rotation event committed.
Makes me think the receipts are just in the wrong store
Ah, maybe it is talking about receipts for the interaction event
made a bit more progress
❯ kli multisig rotate --name delegate1 --alias delegate
Rotated local member=EJ97lUuRH3xz0OMKhdMAU6V2TcSF9X6m1CKyIbIUcRxp, waiting for witness receipts
Sending local rotation event to 1 other participants
Sending rotation event to 1 other participants
Waiting for other signatures...
We are the witnesser, sending EOL2umo-DHgO9t22LR_iwmiR_cfsF531hcCh-zZ0p0gL to delegator
Waiting for delegation approval...
this is where it's hanging. curiously:
Rotated local member=ELZyCjnSL2Haors35LKM19T4qWT4K8Gfz1FPDD9oJN33, waiting for witness receipts
Sending local rotation event to 1 other participants
Sending rotation event to 1 other participants
Waiting for other signatures...
Waiting for delegation approval...
Delegation approval for EOL2umo-DHgO9t22LR_iwmiR_cfsF531hcCh-zZ0p0gL received.
Waiting for witness receipts for EOL2umo-DHgO9t22LR_iwmiR_cfsF531hcCh-zZ0p0gL
I think what's happening is that the two are competing
i'll confirm how the approval is returned, if it is returned to `delegate` or to `[delegate1, delegate2]`
i think i figured it out, confirming
#!/bin/bash

set +euo
export DEBUG_KLI=1

echo "Creating delegate's first local identifier in delegate1 keystore"
kli init --name delegate1 --salt 0ACDEyMzQ1Njc4OWxtbm9aBc --nopasscode --config-dir ${KERI_SCRIPT_DIR} --config-file demo-witness-oobis
kli incept --name delegate1 --alias delegate1 --file ${KERI_DEMO_SCRIPT_DIR}/data/delegate-1.json

echo "Creating delegate's second local identifier in delegate2 keystore"
kli init --name delegate2 --salt 0ACDEyMzQ1Njc4OWdoaWpsaw --nopasscode --config-dir ${KERI_SCRIPT_DIR} --config-file demo-witness-oobis
kli incept --name delegate2 --alias delegate2 --file ${KERI_DEMO_SCRIPT_DIR}/data/delegate-2.json

echo "Creating delegator's first local identifier in delegator1 keystore"
kli init --name delegator1 --nopasscode --config-dir ${KERI_SCRIPT_DIR} --config-file demo-witness-oobis --salt 0ACDEyMzQ1Njc4OWdoaWpdo1
kli incept --name delegator1 --alias delegator1 --file ${KERI_DEMO_SCRIPT_DIR}/data/delegator-1.json

echo "Creating delegator's second local identifier in delegator2 keystore"
kli init --name delegator2 --nopasscode --config-dir ${KERI_SCRIPT_DIR} --config-file demo-witness-oobis --salt 0ACDEyMzQ1Njc4OWdoaWpdo2
kli incept --name delegator2 --alias delegator2 --file ${KERI_DEMO_SCRIPT_DIR}/data/delegator-2.json


echo "Sharing OOBIs between delegate's two local identifiers"
kli oobi resolve --name delegate1 --oobi-alias delegate2 --oobi 
kli oobi resolve --name delegate2 --oobi-alias delegate1 --oobi 
echo "Sharing OOBIs between delegator's two local identifiers"
kli oobi resolve --name delegator1 --oobi-alias delegator2 --oobi 
kli oobi resolve --name delegator2 --oobi-alias delegator1 --oobi 

# In 2 delegator terminal windows run the following
kli multisig incept --name delegator1 --alias delegator1 --group delegator --file ${KERI_DEMO_SCRIPT_DIR}/data/multisig-delegator.json &
pid=$!
PID_LIST+=" $pid"

kli multisig incept --name delegator2 --alias delegator2 --group delegator --file ${KERI_DEMO_SCRIPT_DIR}/data/multisig-delegator.json &
pid=$!
PID_LIST+=" $pid"

# Wait for the multisig delegator to be created
wait $PID_LIST

# Delegator does not need an oobi for delegate.
kli oobi resolve --name delegate1 --oobi-alias delegator --oobi 
kli oobi resolve --name delegate2 --oobi-alias delegator --oobi 

# Run the delegate commands in parallel so they can collaborate and request delegation
kli multisig incept --name delegate1 --alias delegate1 --group delegate --file ${KERI_DEMO_SCRIPT_DIR}/data/multisig-delegate.json &
pid=$!
PID_LIST+=" $pid"

kli multisig incept --name delegate2 --alias delegate2 --group delegate --file ${KERI_DEMO_SCRIPT_DIR}/data/multisig-delegate.json &
pid=$!
PID_LIST+=" $pid"

# Wait for 3 seconds to allow the delegation request to complete and then launch the approval in parallel
sleep 3

kli delegate confirm --name delegator1 --alias delegator --interact --auto &
#kli multisig interact --name delegator1 --alias delegator --data @${KERI_DEMO_SCRIPT_DIR}/data/multisig-delegate-icp-anchor.json &
pid=$!
PID_LIST+=" $pid"

kli delegate confirm --name delegator2 --alias delegator --interact --auto &
#kli multisig interact --name delegator2 --alias delegator --data @${KERI_DEMO_SCRIPT_DIR}/data/multisig-delegate-icp-anchor.json &
pid=$!
PID_LIST+=" $pid"

wait $PID_LIST
PID_LIST=""

kli multisig rotate --name delegate1 --alias delegate &
pid=$!
PID_LIST+=" $pid"
kli multisig rotate --name delegate2 --alias delegate &
pid=$!
PID_LIST+=" $pid"

kli delegate confirm --name delegator1 --alias delegator --interact --auto &
pid=$!
PID_LIST+=" $pid"
kli delegate confirm --name delegator2 --alias delegator --interact --auto &
pid=$!
PID_LIST+=" $pid"

sleep 5

kli multisig continue --name delegate1 --alias delegate
kli multisig continue --name delegate2 --alias delegate

wait $PID_LIST

kli status --name delegate2 --alias delegate

echo "Script complete"
works for me!

the key steps were the multisig continues at the end.
also <@U04H17ZEX9R> be sure to use `main` if you didn't catch that earlier in the thread
nuttawut.kongsuwan
<@U056E1W01K4> Thank you so much for digging so deep to figure out how to do that. It is really interesting to see the debugging process you carried out. When you said `main`, are you talking about the main branch of keripy?
Correct, `main` branch of `keripy`
No problem, I learned a lot!
nuttawut.kongsuwan
If I remember correctly, `kli query` is unavailable in the current main branch, but it came back again in the dev branch. Do you know why that is?
I do not know, but it didn't seem to be necessary
nuttawut.kongsuwan
Thanks!
NP. I had to use it with dev to get the KELs synced.
nuttawut.kongsuwan
I just checked your code. It seems I only missed these two lines: `kli multisig continue --name delegate1 --alias delegate` `kli multisig continue --name delegate2 --alias delegate` With these two, the code just works. Thanks a lot!
nuttawut.kongsuwan
For some reason, my multisig delegate did not pick up confirmations from the delegators automatically and needed the `kli multisig continue`.
Why are both properties `regk` and `regd` set to the same value? This value is the derived AID from the ked of the registry inception event.
prefixer = Prefixer(ked=ked, code=code, allows=[MtrDex.Blake3_256])  # Derive AID from ked and code
ked["i"] = prefixer.qb64  # update pre element in ked with pre qb64
ked["d"] = prefixer.qb64
I want to understand the point of having two properties that are the same value. I imagine there is a case where those two values differ.
Both of these properties are used for the `Seal` anchored into the KEL for a controller:
rseal = SealEvent(registry.regk, "0", registry.regd)
They will only be the same value on inception
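A small sketch of that answer, with an invented digest value (keripy's `Registry` objects carry these as `regk`/`regd`; only the shape of the seal below mirrors the `SealEvent(registry.regk, "0", registry.regd)` line above):

```python
# At registry inception, "i" (regk, the registry identifier) and "d" (regd,
# the SAID of the latest registry event) are both set to the digest derived
# from the vcp event, so the anchoring seal holds the same value twice.
inception_digest = "EA8Ih8hxLi3mmkyItXK1u55cnHl4WgNZ_RE-gKXqgcX4"  # made-up

ked = {"t": "vcp", "i": inception_digest, "d": inception_digest}

# Seal anchored into the controller's KEL, mirroring
# SealEvent(registry.regk, "0", registry.regd):
rseal = {"i": ked["i"], "s": "0", "d": ked["d"]}
assert rseal["i"] == rseal["d"]  # equal only at inception

# After a later TEL event (e.g. vrt), regd would move to that event's SAID
# while regk stays fixed, so a subsequent seal would carry two different values.
later_seal = {"i": inception_digest, "s": "1", "d": "EDifferentLaterEventSaidPlaceholder000000000"}
assert later_seal["i"] != later_seal["d"]
```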
nuttawut.kongsuwan
I have been studying KLI with the demo scripts in Keripy. However, all examples seem to use witnesses, so I wasn’t even aware of the direct mode. May I ask how KLI can be used in the direct mode, especially for multisig and delegation?
We said on the call that support for direct mode is in KERIpy, but not exposed in the kli. You would have to _write code to_ launch an agent with a `Directant` listening on a port that can accept TCP connections. Witnesses currently do this if configured to do so with the command line argument `--tcp`.
nuttawut.kongsuwan
Thank you for clarifying that!
Should ACDC standardize the attributes `NotBefore` and `NotAfter` similarly to `X.509` certificates?
nuttawut.kongsuwan
Something like “expirationDate” and “validFrom” from W3C VC.
nuttawut.kongsuwan
Putting the question in another way — is there a security reason (or practical reason) for the ACDC spec not to support “ValidFrom” and “NotAfter” normatively. These two fields seem to be commonly supported by other credential specifications.
No, this would not make sense because ACDCs, by default, are valid until revoked. Adding properties like `ValidFrom`, `NotAfter`, or similar attributes to every single ACDC schema would add baggage and make each ACDC schema unnecessarily complicated. Furthermore, those two attributes are best defined, as needed, by each use case where they are implemented. Those attributes made more sense in an X.509 world where keys do not rotate and certificates die out. The reason, from a security perspective, those certificates needed `ValidFrom` and `NotAfter` is twofold: 1. no certificate revocation protocol 2. no key rotation Since there was no revocation capability for X.509 certificates, good security hygiene required the certificates themselves to have validity and expiry dates stamped in from the beginning. This is due to the underlying characteristic of X.509 certificates that the keys can't rotate and thus will become less secure as time goes on. If X.509 certificates could rotate their keys and tie the X.509 certificate to the latest key in a sequence of keys then they wouldn't need `ValidFrom` nor `NotAfter`. To do this you would need a key event log, and here we come back to KERI :slightly_smiling_face:
In essence, `ValidFrom` and `NotAfter` were needed because of limited key security options inherent in the `X.509` protocol. With KERI there are many security options and keys can be rotated as often as needed. So you can see, there is no need to add `NotBefore`, `NotAfter`, nor `ValidFrom` into *every single ACDC* because they don’t make sense, by default, in the KERI world. They just add baggage and complexity. Furthermore we shouldn’t add them just because W3C credentials have them. The other W3C credential formats have issues, similar to X.509 certificates. They are monkey patching their credentials with attributes like `NotBefore`and `NotAfter`to try and make up for bad design.
I shouldn’t say there is no certificate revocation for X.509. There are things like certificate revocation lists (CRLs), so they do have that capability. Software just has to know to check. That sort of workflow is easier with KERI through credential revocation, though one could say they are similar in that regard.
Maybe it makes more sense to say that certificate revocation is more complicated in X.509 than credential revocation in KERI.
I see the point now. Thanks for this super detailed explanation <@U03EUG009MY>
nuttawut.kongsuwan
<@U03EUG009MY> I love this part --> “They are monkey patching their credentials with attributes like `NotBefore`and `NotAfter`to try and make up for bad design.” :rolling_on_the_floor_laughing: Richard and I had this conversation yesterday. I actually had a hunch that `NotBefore`and `NotAfter` do not quite fit in the ACDC spec, but I could not articulate the reason. So thank you for an awesome explanation! It is a great insight that outlines the fundamental difference between how X.509 and ACDCs are designed and used.
nuttawut.kongsuwan
This gives me an insight into why X.509s fail to gain significant adoption as a tool for individuals to sign electronically, despite their dominance as SSL certificates. It emphasizes why KERI and ACDCs are much more suitable than X.509 for e-documents that are meant for human-to-human, like what GLEIF did with their annual report. I suppose key rotation and anchoring SAIDs in TEL also make the time-stamping service optional for long-term use, which is a staple in the EU advanced electronic signature.
On timestamping: timestamps are unreliable because system clocks can be tampered with. Timestamps, to be reliable, have to be based on some sort of consensus from a DLT system. I suspect such consensus-based timestamps will be a key long-term feature of blockchain systems. Sequencing from a TEL, or from a KEL, is a reliable source of ordering. I suspect that would better achieve the ordering goal the EU hopes to achieve with timestamps, though if the EU has a standard timestamp consensus network then that would also suffice.
nuttawut.kongsuwan
Our company actually provides a traditional time-stamping service in Thailand. Currently, most of the time-stamping applications still rely on the centralized/administrative model where the time-stamping providers must be certified.
Fascinating. I hadn’t heard of those sorts of systems.
What is the reason behind adding the `vn` field and removing the `v` field from the `KeyStateDict`?
Key State is no longer a KERI event message, but a payload that is to be embedded in other KERI event messages, like `rpy` messages. This was, in point of fact, the only way KSNs were being used, so nothing changed in the usage. Sam's upcoming refactor of CESR revealed that KSNs were a special case with a `d` field that was not the SAID of the event itself but the SAID of the latest event. Since we weren't ever streaming them independently anyway, this change made a lot of sense.
I just listened to one of the recordings and was wondering something. <@U024CJMG22J> you spoke about a local AID and a group AID that the local AID is part of. However, technically this is not right, as we clarified a few days ago. So is my understanding correct that the local AID is just used to aggregate all the keys of a user, so it is easier to retrieve them when a group member wants to add another member to the group (by just specifying the local AID and not all the keys a user controls)?
Local AIDs contribute their keys to the group AIDs. When we say "they are a part of" we are speaking from the perspective of the local database. In order to facilitate the `kli` functionality we keep track of which local AID contributed its keys to the group AID so we know what to keys to use when signing for the group. But externally, no one else needs to know the relationship between the local AID and the group
Thank you for clarifying!
just to be clear, this likely means not sharing the KELs of the local AIDs publicly right? otherwise you could correlate the two
It means it doesn't matter. The holders of the local AIDs can or cannot share them. Its use case driven, imo
if they explicitly want non-correlation, they'd need to, is what I mean
sorry, wasn't very clear
Yes, that is correct
thanks!
But in the current implementation you still need to know the local user AID because this is used to add someone to a group, right?
Or is there another way to identify a person?
In the current implementation the group members communicate with each other via their local AIDs, so yes, those internal to the group must know each other's local AIDs. No one external to the group needs to, though. In future implementations, the public keys to use could be communicated out of band to any KERI protocol, thus allowing the local AIDs to remain anonymous to the other group members, or even non-existent.
Does anyone have a proper way to run the witness container using this?
Okay, thank you!
try this:
docker run -it gleif/keri-witness bash -c 'pip install --upgrade keri && kli witness start -a witness'
it upgrades keri to 1.0 before running the witness
the -it is so you can press ctrl-c
if you don't need it, ignore
after the -a
you can use any name
or random value
thanks <@U056E1W01K4>. I have also written my own dockerfile to run the witness container.
yes, and make it upgrade keri when it builds
if you are rolling your own
I use `gleif/keri:1.0.0` as base image already in the docker file.
right
So I think I do not need to update it.
ahhhh
i will try that image to ensure it works as i suggested
oh i see
sorry i thought you meant the image for the witness
you mean the version of keri
no need to upgrade then
is what you're saying
Thanks for the confirmation
you can probably simplify what I wrote to this:
docker run -it gleif/keri:1.0.0 kli witness start -a witness
i am confirming but need to clean up docker, i ran out of space
sure, will try this command since it is a lot shorter than mine.
it doesn't work in the keri:1.0.0 image that i can tell
like
i am figuring out what is missing
looks like a syscall is failing
`ERR: /usr/local/var/keri/ks/jason: Function not implemented` when i use name jason
so it's trying to do something to the keystore and missing something in the os it needs I believe
FROM gleif/keri:1.0.0

SHELL ["/bin/bash", "-c"]
EXPOSE 5631
EXPOSE 5632

WORKDIR /keripy

RUN mkdir -p /usr/local/var/keri

COPY witness-config.json /keripy/keri/cf/witness-config.json
COPY witness-incept-config-sample.json /keripy/witness-incept-config-sample.json
COPY docker_startup.sh ./docker_startup.sh

RUN chmod +x ./docker_startup.sh

ENTRYPOINT ["./docker_startup.sh"]
I have another `bash` script to help me with the docker startup. Hence, it is different from those on the `gleif/keri` docker hub
ah
either way the keri image should run for me
and it doesn't seem to
i am installing libsodium by hand to see if it resolves it
building quite slowly in the x86 environment on my arm machine
yeah, I have to switch from Mac M1 to my Intel PC to work on keri docker image.
didn't resolve anything
it's somethign to do with the store accessing the filesystem
i'm rebuilding the image for aarch64
using this command from an up to date keripy development branch:
docker build -f images/keripy.dockerfile . -t keripy:development
I have no authority to push anywhere but you can do the same
i want to see if the behaviour is the same in a container that matches the host architecture
if this works (i'm using the newest development code) I'll try main
it runs
❯ docker run -it --rm keri:development bash
bash-5.1# kli witness start -a witness
Witness witness : BPDZDw0xhC3FAx_kwQhI6OHaQr1Jzvw5zkjRGjZS_vuD
success!
❯ docker run -it --rm keri:main kli witness start -a witness
Witness witness : BPDZDw0xhC3FAx_kwQhI6OHaQr1Jzvw5zkjRGjZS_vuD
I do notice, however, that the identifier for the same alias is identical in two independent deployments. Does this mean the keys of all witnesses, as implemented, are knowable if the alias is knowable? <@U024CJMG22J>? I forget if I've asked this

But it seems that I could forge receipts - is that exploitable? I mean, generally you are figuring out the actual witness address through an out of band means and then asking it explicitly for its receipts right?
That docker container is in no way intended for production use yet. We just haven't had the time to spend on it. If deploying a witness in production I recommend creating the database and AID beforehand to use salts or random key generation and passcode if appropriate.
Thank you Phil! It was unclear to me
I wrote a `Dockerfile` for spinning up the witness. It exposes port `5631` for HTTP and `5632` for TCP. Maybe it's useful for some people who are trying to run the witness(es)
We ran some witnesses from commit using a passcode (a 21-character passcode generated by `kli passcode generate`). But when we try to run those witnesses with the latest keripy code using the same passcode, we get the error "Valid passcode required, try again". Does anyone know why this is happening? The old version's passcode is not recognized by the latest code.
daniel.hardman
<@U024CJMG22J>: does this ^^ have anything to do with my question to you about differences in passcode validation between keripy and keria? Or, <@U03P53FCYB1> or <@U0474LZ0ZLG>, could this have anything to do with a change in keria that I heard you discussing, that was going to be a breaking change in some contexts?
rodolfo.miranda
I'm not aware of any change that can affect the passcode used.
This is not related to the passcode, just the way the actual error is manifesting itself. What's happening is the Habery tries to load records from the database that don't match new schema requirements in the `development` branch. An exception is raised, which is caught and assumed to be a bad passcode, so it prompts for the correct one. I've been afraid for some time now that changes to the `development` branch have broken the database schema and that a migration "script" would be required. This confirms it.
I wonder ... if we can calculate all controlling keys by knowing the path, why do we still store all private keys on the agent instead of just storing the seed for the salter on the agent?
We never store the private keys on the agent
Ever
> The encrypted private key and salts are then stored on a remote cloud agent that never has access to the decryption keys.
Sorry, I should have clarified. I meant the encrypted private keys.
> The Salty Key algorithm is used to create a hierarchical deterministic key chain for each AID by generating a unique random salt for each AID and stretching the salt using Argon2 with a `path` that is calculated from the AIDs index relative to all other AIDs and the key index calculated by the total number of signing and rotation keys over the lifetime of the AID.
> The salt for each AID is encrypted with the X25519 encryption key generated from the passcode and stored on the server with other AID metadata, including the AID index and current key index.
We generate one salt per AID and then we encrypt and store these salts on the server.
1. Is my understanding correct that in fact we only store the encrypted salts and not the encrypted private keys (as mentioned in the signify-ts readme)?
2. Why do we generate one salt per AID? In theory it should be possible to use one salt for all AIDs that are generated.
3. Is "salt" really the best wording here? I think "seed" is actually the right word.
rodolfo.miranda
1. Correct, it's the encrypted bran in qb64.
2. I think having the two options is convenient.
So, then, are all keys used by a KERIA agent usable only on direct authorization from the controlling keypair used in memory with Signify? This is how I think it works, I just want to be sure. I haven’t mapped everything out in the codebases to be 100% certain that this is how things work. I’ll get there, just haven’t spent the time yet.
Both algorithms (salty and randy) are technically seed key generation algorithms, just with different entropy sources, so the name Seedy Key may not make sense as a differentiator.
For 2, what do you mean? Are you making a distinction between the input salt (used repeatedly from what I see) and the path (specific to each key)? Both contribute to the seed value if I'm not mistaken. Oh, I see what you mean: it goes on to say "the salt for each AID", and maybe in that case you are saying it should say seed.
*TL;DR*: I think Leo means that the second paragraph should say "The seed for each AID...", not that we should be calling them Seedy Keys. I think he's likely correct.
I thought more
The AID isn't seeded, a private key is, I was conflating the two. I'd have to study the algorithm but 'salt' may still be the correct term here. I think the fact that there are two levels of salt and in two adjacent paragraphs they are not differentiated may pose a learning issue though.
And now that I understand that I think I see Leo's point about wondering why we use a salt per AID, not just a path and the same salt
(each key in the aid would just append a different suffix for key generation and slot # or whatnot)
also when i just said 'key generation' i meant within the sequence of key lists in an AID, not 'generating a key'
Yeah, exactly.
Okay, ignoring the salt vs. seed discussion, I think it would be easier to have one salt/seed for all keypairs of a controller. Example:
1. We generate a random salt and use that as a parameter for a salter.
2. We have some given AID or we want to create a new AID (doesn't matter; what matters is that we create controlling keypairs).
3. We use the stretching function (argon2) of the salter and pass it a path of `<signify:controller:<index_of_aid_in_wallet>:<index_of_key (e.g. 0 for current and 1 for next)>>`.
4. We do step 3 for all controlling keypairs (no matter what AID).
Btw. if in step 3 we don't yet know what the AID will look like because we use the keys for inception, we can still assume what position the AID will have in the wallet.
Or maybe I misunderstood something and this is actually not easier. Anyway, I am very interested to find out why we use a salt/seed for each AID instead of just having one salt/seed as described above.
Maybe <@U024CJMG22J> could help to answer this?
rodolfo.miranda
we actually create the salty aids with the random salt, a stem (that is a prefix string), a pidx (AID index) and kidx (keys index, need to consider rotations and number of keys), like `path=stem+pidx.toString(16)+kidx.toString(16)`
Yes, I understood this, however why do we use a salt per aid instead of using one salt for all aids?
To avoid forcing a salt rotation nightmare. If we use one salt to encrypt as well as the salt for all the AIDs, even a prophylactic rotation of the salt (let alone a compromise forced rotation) would require all AIDs to be rotated. That seemed like a usability mess if we forced that behavior. Instead, we allow for a new salt for every AID that is different than the Salt used for encryption but also allow anyone who wishes to, to pass in the Salt for each AID so they can reuse if they choose to. So a rotation of your passcode (the encryption Salt) only requires re-encryption of all the passcode salts, or random keys.
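Phil's point about decoupling passcode rotation from key rotation can be sketched in a few lines. This is a purely hypothetical illustration, not the keripy/signify API: XOR with a scrypt-stretched passcode key stands in for the real X25519 encryption.

```python
# Hypothetical sketch: rotating the passcode only re-encrypts the stored
# per-AID salts; the salts themselves (and hence every AID's keys) are
# unchanged, so no AID rotation is forced. XOR + scrypt stand in for the
# real X25519 encryption; nothing here is the keripy/signify API.
import hashlib
import os

def enc_key(passcode: bytes) -> bytes:
    """Stretch a passcode into a 16-byte encryption key (stand-in for X25519)."""
    return hashlib.scrypt(passcode, salt=b"demo", n=2**12, r=8, p=1, dklen=16)

def xor(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher for illustration only."""
    return bytes(a ^ b for a, b in zip(data, key))

salts = {"aid0": os.urandom(16), "aid1": os.urandom(16)}  # one salt per AID

old = enc_key(b"old-passcode")
stored = {aid: xor(s, old) for aid, s in salts.items()}   # encrypted at rest

# Passcode rotation: decrypt with the old key, re-encrypt with the new one.
new = enc_key(b"new-passcode")
stored = {aid: xor(xor(c, old), new) for aid, c in stored.items()}

# The underlying salts are unchanged, so no AID needs a key rotation.
assert all(xor(stored[aid], new) == salts[aid] for aid in salts)
```

The point of the sketch is the last line: after rotating the passcode, decrypting with the new key yields exactly the old salts, so the AIDs' key chains are untouched.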
Okay, I already assumed that, but technically the salt is only at risk of being exposed if it gets decrypted, and that's the case if someone knows the encryption keypair. But if someone knows the encryption keypair they can decrypt all salts anyway, so there is not really added security, right?
In that case we would need to rotate all AIDs anyway.
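For anyone following along, the path-plus-stretch derivation described earlier in the thread (`path = stem + hex(pidx) + hex(kidx)`, then stretch the per-AID salt with the path) can be sketched like this. It's a hypothetical illustration: `hashlib.scrypt` stands in for the Argon2 stretching keripy/signify actually use (Argon2 isn't in the Python stdlib), and the stem string is made up.

```python
# Hypothetical sketch of the salty-key derivation discussed above.
# scrypt stands in for Argon2; the stem and parameters are illustrative.
import hashlib
import os

def key_seed(salt: bytes, stem: str, pidx: int, kidx: int) -> bytes:
    """Stretch the per-AID salt with a deterministic path into a 32-byte seed."""
    path = stem + format(pidx, "x") + format(kidx, "x")  # stem + hex indices
    return hashlib.scrypt(path.encode(), salt=salt, n=2**12, r=8, p=1, dklen=32)

salt = os.urandom(16)                               # one random salt per AID
s0 = key_seed(salt, "signify:aid", 0, 0)            # first signing key of AID 0
s1 = key_seed(salt, "signify:aid", 0, 1)            # its next (rotation) key
assert s0 != s1                                     # different path -> different seed
assert s0 == key_seed(salt, "signify:aid", 0, 0)    # fully deterministic
```

Because the derivation is deterministic given (salt, pidx, kidx), only the encrypted salt plus the two indices need to be stored server-side to recreate any key.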
charles.lanahan
Maybe there's a clear answer somewhere that I can't seem to find but why is the default for witnesses to open an HTTP port and a TCP port? Is there anything specifically special about either of these or is it just for redundancy of protocol choices?
charles.lanahan
Why is the passcode referred to as `bran` in the code. Looking at the code I can see its combining the passcode and the salt to stretch etc..., but I was wondering where the term came from? I can't seem to find it via kerrise or google.
I believe the etymology of that term has something to do with the relationship between a “seed” and a “bran” that has nothing to do with cryptography. But that might be a good trivia question for Sam during a call.
rodolfo.miranda
When an agent is created with a config file, is it possible for example to update its `curls`? I think the config is just read once at creation time, so changes don't take effect on the next start.
charles.lanahan
yeah the config is only read in at startup. If you want to update its curls you have to use another mechanism.
charles.lanahan
I don't know how to do that part though, maybe someone else knows how to dynamically update the controller listeners.
rodolfo.miranda
Thanks. In my case I'm just looking for a way to update the curls that are shown in the oobi of the agent. I think a db update will work, but wondering if there's other way already implemented that I'm missing.
andreialexandru98
Can anyone point me to an example of a chained acdc and what that looks like? :pray: :pray:
If you run the `issue-xbrl-attestation.sh` script from keripy you'll see a bunch of vLEI credentials issued to the AIDs in the script, all of which are chained together. After the script is finished, you can run `kli vc list` for any of the AIDs with the `--verbose` option to get a dump of the json of each of the credentials.
andreialexandru98
Thank you!
There’s also one in the abydos-tutorial repository. The KASLCred library helps you define your own chain of credentials.
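For a rough picture of what the chaining looks like on the wire: a chained ACDC carries an `e` (edges) section whose edge points at the SAID of the far credential it chains to. Below is a heavily abbreviated, hypothetical sketch with placeholder values (angle brackets are not literal; real credentials carry more fields):

```json
{
  "v": "ACDC10JSON000197_",
  "d": "<SAID of this credential>",
  "i": "<issuer AID>",
  "s": "<schema SAID>",
  "a": {
    "d": "<SAID of the attributes block>",
    "i": "<issuee AID>",
    "LEI": "<LEI value>"
  },
  "e": {
    "d": "<SAID of the edges block>",
    "qvi": {
      "n": "<SAID of the QVI credential being chained to>",
      "s": "<schema SAID the far credential must satisfy>"
    }
  }
}
```

The `kli vc list --verbose` dump mentioned above shows the real, fully populated version of this shape.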
petteri.stenius
About Endpoint Authorization. Could somebody explain this concept and what it is used for? The more recent kli demo scripts have added "kli ends add" and "kli multisig ends add" etc. Many of the signify client demo scripts have "addEndRole" etc. I found but it's not really explaining any use cases... Thanks :)
amin.benmansour.10
I am sorry for not having an explanation for endpoint authorization but I have seen this section and that's super interesting. I wonder whether this has been discussed in KERI meetings previously
Screenshot_20230824_011229.png
amin.benmansour.10
I assume DOOBI is the `durl` OOBIs in config files but I never expected we can use DIDComm and DID Resolver with OOBIs!
What are good livenessProbe and readinessProbe endpoints for KERIA? Is a GET on `./spec.yaml` the best we have right now? I can use that, it’s just a lot of data for a healthcheck. I’d be happy to submit a PR with a simple `/health` endpoint if that would be useful and there’s nothing else planned.
The spec functionality is actually hosted on the wrong endpoint right now and must be moved so I wouldn't rely on that. There are no other endpoints available currently that would serve this purpose.
If we were to add a simple `/health` which endpoint would it make the most sense on, the http (3902) or boot (3903)? I assume it wouldn’t make sense on the admin (3901) since that one has the Signify signature validation middleware.
Here’s a sample PR:
There isn't a simple answer I'm afraid.
It’s a really dumb healthcheck and doesn’t tell you anything other than that the server is up
We have to decide what liveness and readiness mean. You have custodial wallets so are you checking that you can just get to the service or that specific wallets are available. Probably worth discussing next week.
I’ll bring it up then. There may be multiple levels of a health check, one just for checking that the Agency and other machinery is functioning, and one that is wallet specific. We could add path exclusion to the signify signature validation middleware for only the higher level health check.
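As a framework-agnostic illustration of the simplest level of health check being proposed (KERIA itself is built on falcon; this stdlib sketch is not KERIA code), something like this is all the probe needs:

```python
# Minimal stdlib sketch of a `/health` endpoint for a liveness probe.
# Hypothetical illustration only; KERIA uses falcon, not http.server.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep probe traffic out of the logs

srv = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0: pick a free port
threading.Thread(target=srv.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{srv.server_port}/health") as resp:
    status, payload = resp.status, json.loads(resp.read())
srv.shutdown()

print(status, payload)  # 200 {'status': 'ok'}
```

A wallet-specific readiness check would layer on top of this, e.g. verifying that the agency's databases open, which is the distinction worth settling in the meeting.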
Any ideas on why the keri image is around 1.87GB? I’d like to trim that as much as possible and am curious if anyone has had success minimizing that size. The base `python3.10.4-alpine3.16` image used by the keripy/images/keripy.dockerfile is only 46.9MB. Then, after installing the following packages,
bash \
alpine-sdk \
libffi-dev \
libsodium \
libsodium-dev
 the Rust toolchain, and a `pip install`, the size balloons to 1.87GB.

It doesn’t matter too much once you push your initial image to your container registry because you hardly ever change the lower layers of your image and most often change only the topmost layer of source code.
Mostly curious about how small we can get this image.
Hello, can I ask some newbie questions about KLI.. Q1. I'm able to create AIDs using "kli init" and "kli incept", and I'm also able to retrieve/view some useful information about the AID using commands such as the following, they all work fine:
kli export --name .. --alias ..
kli oobi resolve --name .. --oobi ..
kli kevers --name .. --prefix ..
In some of the returned data, I can see the inception event ("t":"icp"). Is there also a way to retrieve 1. A Key State Notice ("t":"ksn"), and 2. The entire KEL associated with the AID?
Q2. I can see there is a command "query" on the "development" branch.. Its description says "Request KEL from Witness". But when I try it, I get this error, what does it mean?
Checking for updates...
ERR: 'Serder' object is not subscriptable
nuttawut.kongsuwan
To get the whole KEL, you can use `kli status --name ... --alias .. --verbose`
Q3. I'm also trying to play with "kli rotate", but haven't been able yet to make it work. I don't want to do multisig (yet) or change witnesses or anything like that, just rotate a single key. Is there documentation or an example anywhere on how to do this? To be honest, there are a lot of things that are not easy to find documentation for. E.g. I think I can more or less imagine what "isith", "nsith", "icount", "ncount", "toad", etc. mean, but it's not really explained in the or the .
nuttawut.kongsuwan
For key state notice, unfortunately, I don’t know.
Thanks that looks useful.. What's the difference between "kli status" and "kli kevers"? The output looks pretty much the same.
this might be helpful. Click the “more” option for all the terms mentioned and where they occur the first time. The links go to the glossary per term and to the point in the video: Use KERISSE for all other terms: and the hourglass in the upper right corner.
nuttawut.kongsuwan
I am not so sure about that. My guess is that `kli status` shows the controller’s own KEL while `kli kevers` shows others’ KEL that have been resolved by OOBIs. You might need someone else to confirm this.
nuttawut.kongsuwan
I think this is what you are looking for.
nuttawut.kongsuwan
Note you need to `source` this script before running the demo-script.
nuttawut.kongsuwan
You can see their brief description by `kli incept -h`
Screenshot 2566-09-02 at 07.55.44.png
nuttawut.kongsuwan
‘toad’ is a tricky one. In short, it is the minimum number of receipts from witnesses that the controller needs to get before accepting accountability over the KEL.
nuttawut.kongsuwan
for simple rotation, you can also do this
kli init -n abc --nopasscode
kli incept -n abc -a def -tf true -t 0 -ic 1 -s 1 -nc 1 -x 1
kli rotate -n abc -a def
Screenshot 2566-09-02 at 08.00.17.png
nuttawut.kongsuwan
-n short for --name -a short for --alias
nuttawut.kongsuwan
I hope this helps! :innocent:
nuttawut.kongsuwan
I am not so familiar with `kli query`. I believe it is used to request other controllers’ KEL when you are doing a multisignature scheme.
nuttawut.kongsuwan
I am not sure what your error means. My guess is that your kli controller has not resolved other controllers’ OOBIs yet, so it doesn’t know what to do.
nuttawut.kongsuwan
I just tried playing with `kli kevers` myself. I tried init and incept two controllers, called issuer and holder. If I use `kli status -n issuer` and `kli kevers -n issuer --prefix {issuer's aid}` then I get the same result. I think `kli kevers` is useful when issuer resolves holder’s OOBI and then see holder’s KEL — `kli kevers -n issuer --prefix {holder's aid}`
I know how to fix this Kent
Let me play around. I can probably drop it to 300-400
Maybe less
You almost always want to use multi-stage builds for docker images. Building the falcon wheel took longest (why is falcon in requirements.txt on the dev branch?):
❯ docker build . -t keri:local -f images/keripy.dockerfile
[+] Building 206.7s (22/22) FINISHED                                                                              
 => [internal] load .dockerignore                                                                            0.0s
 => => transferring context: 2B                                                                              0.0s
 => [internal] load build definition from keripy.dockerfile                                                  0.0s
 => => transferring dockerfile: 853B                                                                         0.0s
 => [internal] load metadata for                                   0.8s
 => [builder  1/13] FROM   0.0s
 => [internal] load build context                                                                            0.1s
 => => transferring context: 62.96kB                                                                         0.0s
 => CACHED [stage-1 2/5] RUN apk add alpine-sdk                                                              0.0s
 => CACHED [stage-1 3/5] RUN apk add libsodium                                                               0.0s
 => CACHED [builder  2/13] RUN apk update                                                                    0.0s
 => CACHED [builder  3/13] RUN apk add bash                                                                  0.0s
 => CACHED [builder  4/13] RUN apk add alpine-sdk                                                            0.0s
 => CACHED [builder  5/13] RUN apk add libffi-dev                                                            0.0s
 => CACHED [builder  6/13] RUN apk add libsodium                                                             0.0s
 => CACHED [builder  7/13] RUN apk add libsodium-dev                                                         0.0s
 => CACHED [builder  8/13] RUN curl  -sSf | bash -s -- -y                                0.0s
 => CACHED [builder  9/13] WORKDIR /keripy                                                                   0.0s
 => CACHED [builder 10/13] RUN python -m venv venv                                                           0.0s
 => CACHED [builder 11/13] RUN pip install --upgrade pip                                                     0.0s
 => [builder 12/13] COPY . /keripy                                                                           0.1s
 => [builder 13/13] RUN . ${HOME}/.cargo/env && pip install -r requirements.txt                            204.4s
 => [stage-1 4/5] COPY --from=builder /keripy /keripy                                                        0.3s 
 => [stage-1 5/5] WORKDIR /keripy                                                                            0.0s 
 => exporting to image                                                                                       0.4s 
 => => exporting layers                                                                                      0.4s 
 => => writing image sha256:cd177c28ad656b174ee509b85e8d995ad6d805f8a26bb1a6348ef5333505ccbe                 0.0s 
 => => naming to                                                                 0.0s 
❯ docker image ls keri:local
REPOSITORY   TAG       IMAGE ID       CREATED         SIZE
keri         local     cd177c28ad65   6 seconds ago   357MB
Oh it's not quite working yet. I'll put a PR up when it runs
Actually I improved it a bit:
❯ docker image ls keri:local
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
keri         local     8950793de5c9   16 seconds ago   327MB
❯ docker run --rm keri:local                              
usage: kli [-h] command ...

options:
  -h, --help  show this help message and exit

subcommands:

  command
    agent
    challenge
    contacts
    delegate
    did
    ends
    escrow    Initialize a prefix
    export    List credentials and check mailboxes for any n ...
    incept    Initialize a prefix
    init      Create a database and keystore
    interact  Create and publish an interaction event
    kevers    Initialize a prefix
    list      List existing identifiers
    local
    mailbox
    multisig
    nonce     Print a new random nonce
    oobi
    passcode
    query     Request KEL from Witness
    rename    Change the alias for a local identifier
    rollback  Revert an unpublished interaction event at the ...
    rotate    Rotate keys
    saidify   Saidify a JSON file.
    salt      Print a new random passcode
    sign      Sign an arbitrary string
    ssh
    status    View status of a local AID
    vc
    verify    Verify signature(s) on arbitrary data
    version   Print version of KLI
    wallet
    watcher
    witness
I made kli the entrypoint in the example, so you can mount a directory for the storage and then just issue commands
The key principles here are: • only copy what you need just before you need it, to take advantage of maximum caching during dev builds • use multiple stages to reset the intermediary/temp/dev files required for installs
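Those two principles put together look roughly like this. This is a hypothetical two-stage sketch, not the exact keripy.dockerfile: the base image tag, paths, and manifest file names are assumptions, and it assumes requirements.txt lists only third-party deps.

```dockerfile
# Hypothetical multi-stage sketch; names and paths are illustrative.
FROM python:3.10.4-alpine3.16 AS builder
RUN apk add --no-cache bash alpine-sdk libffi-dev libsodium libsodium-dev curl
RUN curl https://sh.rustup.rs -sSf | bash -s -- -y   # Rust toolchain for native wheels
WORKDIR /keripy
RUN python -m venv venv && ./venv/bin/pip install --upgrade pip
# Copy only the dependency manifests first so this expensive layer caches.
COPY requirements.txt setup.py ./
RUN . ${HOME}/.cargo/env && ./venv/bin/pip install -r requirements.txt

FROM python:3.10.4-alpine3.16
RUN apk add --no-cache bash libsodium      # runtime needs only the shared lib
COPY --from=builder /keripy /keripy
WORKDIR /keripy
# Copy source last: a code-only change rebuilds just this layer.
COPY src/ src/
ENV PATH="/keripy/venv/bin:${PATH}"
ENTRYPOINT ["kli"]
```

The builder stage keeps alpine-sdk, libffi-dev, libsodium-dev, and the Rust toolchain out of the final image, which is where most of the 1.87GB was going.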
One thing that didn't feel right was installing sodium-dev in the second stage - why can't we just install libsodium?
When you start getting fancier, you make base images of to start your builds and avoid doing the package installations
Speeds up CI considerably
A multistage build is what I tried unsuccessfully to pull things out of. I tried copying over site-packages after a build stage, and I would reinstall libsodium, yet I would still get errors on libsodium not found when I would do “kli version” I’ll read your PR real quick
❯ docker run --rm keri:local version            
1.0.0
The sodium dependency is binary
You need to install it with apk
the thing is, you shouldn't need the dev headers etc
To simply use it
Only to build against it
So when the libsodium python wheel builds, it needs the headers
Then you should be able to copy the venv as I have done
And use regular libsodium
but I found this not to be the case
Ah, you went one step further than I did, copying over the src/ directory.
But only src
If you copy the whole keripy dir
the images get copied in
which means the dockerfile that just changed
gets pulled in too (very meta)
which breaks caching
and forces you to rebuild the python deps EVERY time
which is insanity
also the makefile command explicitly uses no-cache
which is probably unnecessary
I bet it was a safety
i rarely actually need to apply no-cache, only when something breaks
the other trick i used
was to put the venv bin dir in the path directly
so that you don't need to mess around
Oh, interesting, I didn’t catch that issue on pulling in the Dockerfile. Thanks for that. I did wonder why it was building the deps every time. Got bit by that `--no-cache` as well
You could probably make the image a bit smaller by using a non-python alpine base and installing exactly what you needed
But that's likely unnecessary for now
330mb is decent
Yeah, that’s pretty good. I appreciate your PR, speeds things up for my team.
No problem!
Sweet! Got me a small image:
<imgname> ... 9a6c44aff17f   29 seconds ago   321MB
Well, smaller
Yes, this is super helpful, really appreciate you taking the time!
Other than adding a witness to the KERIA bootstrap JSON config file, is there anything else that needs to be done to tell KERIA about a witness? I’m getting a `MissingEntryError` for a witness I told KERIA about that is running in a Kubernetes pod on my machine. I exposed the appropriate ports, 5642 (http) and 5632 (tcp), with my K8s Deployment and also with a K8s LoadBalancer service. The error looks as follows:
keri.kering.MissingEntryError: unable to query witness BHOcmjmmDVU0LiK94GB19sTUfh0flyELkEKUvWI9qxWe, no http endpoint
I assume network visibility is working because I can issue an HTTP request to `127.0.0.1:5642/receipts` and get a response from the Ioflo server.
Does it depend on a config file you missed in your docker build?
The image PR I made assumed you’d mount any config needed
Yes, I am mounting my witness config file into my witness container. I am running KERIA outside of a container on my Mac. The witness I’m running in Docker Desktop.
The KERIA config file has an OOBI in it to my one witness I’m running in K8s
Why would `kli init` be asking for a passcode when I specified `--nopasscode`?
Because there is already a database existing where you are trying to create a new one?
yes, creating it from scratch.
Maybe Docker Desktop didn’t clean up the PersistentVolume like I assumed it did when I deleted and recreated it. When I did `rm -rfv /usr/local/var/keri` then I got a different error, `ERR: Attempt to make Hab with unopened resources.` This is within a Kubernetes pod and the config file is mounted in as a volume, with 0777 permissions, which I thought would work. When I copy the config file I mounted in and retry the command then `kli init` and `kli witness start` work just fine.
I think what is happening is that the mounted config file is causing the error `ERR: Attempt to make Hab with unopened resources.` on first start, then K8s auto restarts the pod because it failed, and then when I exec in there it sees the database is already existing and thus asks for a passcode
likely something to do with file permissions on my config mount.
I’ll try copying the file from a configmap to a mounted volume and see if that helps.
KERIA Issue for liveness and readiness check health endpoints:
KERIpy Issue for liveness and readiness check health endpoints for Witnesses and Watchers:
amin.benmansour.10
Thank you for pointing it out! I was just digging in the code to see if there were any liveness and readiness health check endpoints in KERIA!
By adding a simple `RUN mkdir /keripy/src` I was able to move the `COPY src/ src/` into the final image rather than the *builder* image, which means that you have near instantaneous builds for simple code changes. Prior to moving `COPY src/ src/` into the last stage, every simple code change necessitated a rebuild of all dependencies because the `COPY src/ src/` was above the `pip install` RUN command.
niiice
genius
charles.lanahan
what was the doc name that described the "habbing/haber" "doing/doer" nomenclature used in keripy? I had it book marked somewhere but I can't seem to find it at the moment.
nuttawut.kongsuwan
These?
charles.lanahan
<@U04H17ZEX9R> well I meant like the explanation for the naming convention. I felt like I read some document describing the `-er` vs `-ing` convention that I can't find anymore.
nuttawut.kongsuwan
Please let me know if you found them! I would be interested as well.
charles.lanahan
Another documentation question. I swear I read some documentation that went through `iurl` `curl` and some others that I've forgotten or that described the keripy config files and now I can't find that either. Does that exist somewhere or did I just imagine it?
charles.lanahan
Or do all the variations just exist in this file (not necessarily this snippet)?
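In case it helps, the witness/agent config files I've seen follow roughly this shape. Everything below is a hypothetical sketch with placeholder values (the `wan` section name, AIDs, and URLs are made up):

```json
{
  "dt": "2023-01-01T00:00:00.000000+00:00",
  "wan": {
    "dt": "2023-01-01T00:00:00.000000+00:00",
    "curls": ["tcp://127.0.0.1:5632/", "http://127.0.0.1:5642/"]
  },
  "iurls": [
    "http://127.0.0.1:5642/oobi/<witness-aid>/controller"
  ],
  "durls": [
    "http://127.0.0.1:7723/oobi/<schema-said>"
  ]
}
```

As far as I understand: `curls` are the controller's own advertised endpoints, `iurls` are introduction OOBIs resolved at boot, and `durls` are data OOBIs (e.g. schemas).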
daniel.hardman
<@U05H2PS5U6Q> and <@U04H17ZEX9R>: The material you want was covered on Jan 24, 2023. Unfortunately, the meeting notes say "English semantic naming by Sam (see recording)" directly below a line that says "No recording made". The correct way to solve this problem, IMO, is not to have another presentation on the subject in a community meeting; it is to have a 1-page writeup on the subject, checked in to the codebase that uses the convention. The code and the explanation for the code belong together. Let's ask for this on the next community meeting.
daniel.hardman
Is it expected that keria unit tests are failing? I have a completely fresh fork (no mods by me) on a generic dev environment, and this is what I'm seeing:
(venv) daniel@erisabeta2:~/code/keria$ pytest tests/
================================================= test session starts ==================================================
platform linux -- Python 3.10.6, pytest-7.4.2, pluggy-1.3.0
rootdir: /home/daniel/code/keria
collected 43 items

tests/app/test_agenting.py ........                                                                              [ 18%]
tests/app/test_aiding.py ...........                                                                             [ 44%]
tests/app/test_basing.py F                                                                                       [ 46%]
tests/app/test_credentialing.py ....F                                                                            [ 58%]
tests/app/test_grouping.py ..                                                                                    [ 62%]
tests/app/test_httping.py .                                                                                      [ 65%]
tests/app/test_indirecting.py .                                                                                  [ 67%]
tests/app/test_notifying.py ..                                                                                   [ 72%]
tests/app/test_presenting.py .F.                                                                                 [ 79%]
tests/app/test_specing.py .                                                                                      [ 81%]
tests/core/test_authing.py ..                                                                                    [ 86%]
tests/core/test_httping.py ..                                                                                    [ 90%]
tests/end/test_ending.py ..                                                                                      [ 95%]
tests/testing/test_testing_helper.py ..                                                                          [100%]

======================================================= FAILURES =======================================================
_____________________________________________________ test_seeker ______________________________________________________

helpers = <class 'keria.testing.testing_helper.Helpers'>, seeder = <class 'keria.testing.testing_helper.DbSeed'>
mockHelpingNowUTC = None

    def test_seeker(helpers, seeder, mockHelpingNowUTC):
        salt = b'0123456789abcdef'

        with habbing.openHab(name="hal", salt=salt, temp=True) as (issueeHby, issueeHab), \
                habbing.openHab(name="issuer", salt=salt, temp=True) as (issuerHby, issuerHab), \
                helpers.withIssuer(name="issuer", hby=issuerHby) as issuer:

            seeker = basing.Seeker(db=issuerHby.db, reger=issuer.rgy.reger, reopen=True, temp=True)

            seeder.seedSchema(issueeHby.db)
            seeder.seedSchema(issuerHby.db)

            with pytest.raises(ValueError):
                seeker.generateIndexes(said="INVALIDSCHEMASAID")

            indexes = seeker.generateIndexes(QVI_SAID)

            # Verify the indexes created for the QVI schema
            assert indexes == ['5AABAA-s',
                               '5AABAA-i',
                               '5AABAA-i.5AABAA-s',
                               '4AAB-a-i',
                               '4AAB-a-i.5AABAA-s',
                               '4AAB-a-d',
                               '5AABAA-s.4AAB-a-d',
                               '5AABAA-i.4AAB-a-d',
                               '5AABAA-i.5AABAA-s.4AAB-a-d',
                               '4AAB-a-i.4AAB-a-d',
                               '4AAB-a-i.5AABAA-s.4AAB-a-d',
                               '5AABAA-s.4AAB-a-i',
                               '5AABAA-i.4AAB-a-i',
                               '5AABAA-i.5AABAA-s.4AAB-a-i',
                               '4AAB-a-i.4AAB-a-i',
                               '4AAB-a-i.5AABAA-s.4AAB-a-i',
                               '6AACAAA-a-dt',
                               '5AABAA-s.6AACAAA-a-dt',
                               '5AABAA-i.6AACAAA-a-dt',
                               '5AABAA-i.5AABAA-s.6AACAAA-a-dt',
                               '4AAB-a-i.6AACAAA-a-dt',
                               '4AAB-a-i.5AABAA-s.6AACAAA-a-dt',
                               '5AACAA-a-LEI',
                               '5AABAA-s.5AACAA-a-LEI',
                               '5AABAA-i.5AACAA-a-LEI',
                               '5AABAA-i.5AABAA-s.5AACAA-a-LEI',
                               '4AAB-a-i.5AACAA-a-LEI',
                               '4AAB-a-i.5AABAA-s.5AACAA-a-LEI']

            # Test that the index tables were correctly created
            assert len(seeker.indexes) == 29

            indexes = seeker.generateIndexes(LE_SAID)

            # Test the indexes assigned to the LE schema
            assert indexes == ['5AABAA-s',
                               '5AABAA-i',
                               '5AABAA-i.5AABAA-s',
                               '4AAB-a-i',
                               '4AAB-a-i.5AABAA-s',
                               '4AAB-a-d',
                               '5AABAA-s.4AAB-a-d',
                               '5AABAA-i.4AAB-a-d',
                               '5AABAA-i.5AABAA-s.4AAB-a-d',
                               '4AAB-a-i.4AAB-a-d',
                               '4AAB-a-i.5AABAA-s.4AAB-a-d',
                               '5AABAA-s.4AAB-a-i',
                               '5AABAA-i.4AAB-a-i',
                               '5AABAA-i.5AABAA-s.4AAB-a-i',
                               '4AAB-a-i.4AAB-a-i',
                               '4AAB-a-i.5AABAA-s.4AAB-a-i',
                               '6AACAAA-a-dt',
                               '5AABAA-s.6AACAAA-a-dt',
                               '5AABAA-i.6AACAAA-a-dt',
                               '5AABAA-i.5AABAA-s.6AACAAA-a-dt',
                               '4AAB-a-i.6AACAAA-a-dt',
                               '4AAB-a-i.5AABAA-s.6AACAAA-a-dt',
                               '5AACAA-a-LEI',
                               '5AABAA-s.5AACAA-a-LEI',
                               '5AABAA-i.5AACAA-a-LEI',
                               '5AABAA-i.5AABAA-s.5AACAA-a-LEI',
                               '4AAB-a-i.5AACAA-a-LEI',
                               '4AAB-a-i.5AABAA-s.5AACAA-a-LEI']

            # Assure that no new index tables needed to be created
            assert len(seeker.indexes) == 29

            # test credential with "oneOf"
            seeker.generateIndexes(said="EBfdlu8R27Fbx-ehrqwImnK-8Cm79sqbAQ4MmvEAYqao")

>           issuer.createRegistry(issuerHab.pre, name="issuer")

tests/app/test_basing.py:107:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <keria.testing.testing_helper.Issuer object at 0x7f8814da6290>
pre = 'EIqTaQiZw73plMOq8pqHTi9BDgDrrE7iE9v2XfN2Izze', name = 'issuer'

    def createRegistry(self, pre, name):
        conf = dict(nonce='AGu8jwfkyvVXQ2nqEb5yVigEtR31KSytcpe2U2f7NArr')

>       registry, _ = self.registrar.incept(name=name, pre=pre, conf=conf)
E       TypeError: Registrar.incept() got an unexpected keyword argument 'name'

src/keria/testing/testing_helper.py:580: TypeError
_______________________________________________ test_credentialing_ends ________________________________________________

helpers = <class 'keria.testing.testing_helper.Helpers'>, seeder = <class 'keria.testing.testing_helper.DbSeed'>

    def test_credentialing_ends(helpers, seeder):
        salt = b'0123456789abcdef'

        with helpers.openKeria() as (agency, agent, app, client), \
                habbing.openHab(name="issuer", salt=salt, temp=True) as (hby, hab), \
                helpers.withIssuer(name="issuer", hby=hby) as issuer:
            idResEnd = aiding.IdentifierResourceEnd()
            credEnd = credentialing.CredentialCollectionEnd(idResEnd)
            app.add_route("/identifiers/{name}/credentials", credEnd)
            credResEnd = credentialing.CredentialQueryCollectionEnd()
            app.add_route("/identifiers/{name}/credentials/query", credResEnd)
            credResEnd = credentialing.CredentialResourceEnd(idResEnd)
            app.add_route("/identifiers/{name}/credentials/{said}", credResEnd)

            assert hab.pre == "EIqTaQiZw73plMOq8pqHTi9BDgDrrE7iE9v2XfN2Izze"

            seeder.seedSchema(hby.db)
            seeder.seedSchema(agent.hby.db)

            end = aiding.IdentifierCollectionEnd()
            app.add_route("/identifiers", end)
            op = helpers.createAid(client, "test", salt)
            aid = op["response"]
            issuee = aid['i']
            assert issuee == "EHgwVwQT15OJvilVvW57HE4w0-GPs_Stj2OFoAHZSysY"

            rgy = Regery(hby=hby, name="issuer", temp=True)
            registrar = Registrar(hby=hby, rgy=rgy, counselor=None)

            conf = dict(nonce='AGu8jwfkyvVXQ2nqEb5yVigEtR31KSytcpe2U2f7NArr')

>           registry, _ = registrar.incept(name="issuer", pre=hab.pre, conf=conf)
E           TypeError: Registrar.incept() got an unexpected keyword argument 'name'

tests/app/test_credentialing.py:295: TypeError
__________________________________________________ test_presentation ___________________________________________________

helpers = <class 'keria.testing.testing_helper.Helpers'>, seeder = <class 'keria.testing.testing_helper.DbSeed'>
mockHelpingNowUTC = None

    def test_presentation(helpers, seeder, mockHelpingNowUTC):
        salt = b'0123456789abcdef'

        with helpers.openKeria() as (agency, agent, app, client), \
                habbing.openHab(name="issuer", salt=salt, temp=True) as (hby, hab), \
                helpers.withIssuer(name="issuer", hby=hby) as issuer:

            presentationEnd = PresentationCollectionEnd()
            app.add_route("/identifiers/{name}/credentials/{said}/presentations", presentationEnd)

            end = aiding.IdentifierCollectionEnd()
            app.add_route("/identifiers", end)
            op = helpers.createAid(client, "test", salt)
            aid = op["response"]
            issuee = aid['i']
            assert issuee == "EHgwVwQT15OJvilVvW57HE4w0-GPs_Stj2OFoAHZSysY"

            seeder.seedSchema(hby.db)
            seeder.seedSchema(agent.hby.db)

            rgy = Regery(hby=hby, name="issuer", temp=True)
            registrar = Registrar(hby=hby, rgy=rgy, counselor=None)

            conf = dict(nonce='AGu8jwfkyvVXQ2nqEb5yVigEtR31KSytcpe2U2f7NArr')

>           registry, _ = registrar.incept(name="issuer", pre=hab.pre, conf=conf)
E           TypeError: Registrar.incept() got an unexpected keyword argument 'name'

tests/app/test_presenting.py:51: TypeError
=============================================== short test summary info ================================================
FAILED tests/app/test_basing.py::test_seeker - TypeError: Registrar.incept() got an unexpected keyword argument 'name'
FAILED tests/app/test_credentialing.py::test_credentialing_ends - TypeError: Registrar.incept() got an unexpected keyword argument 'name'
FAILED tests/app/test_presenting.py::test_presentation - TypeError: Registrar.incept() got an unexpected keyword argument 'name'
============================================ 3 failed, 40 passed in 26.59s =============================================
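All three failures share one cause: the tests pass a `name` keyword that `Registrar.incept()` no longer accepts. A hypothetical, defensive sketch (not part of keripy/keria) of coping with this kind of keyword-argument drift is to filter the kwargs against the callee's actual signature before calling it:

```python
import inspect

def filter_kwargs(func, **kwargs):
    """Drop keyword arguments that func does not accept.

    Hypothetical helper for API drift like the TypeError above, where
    Registrar.incept() stopped accepting a 'name' keyword.
    """
    params = inspect.signature(func).parameters
    # If the callee takes **kwargs, everything passes through unchanged.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return kwargs
    return {k: v for k, v in kwargs.items() if k in params}

def incept(pre, conf):  # stand-in for the new Registrar.incept signature
    return (pre, conf)

# 'name' is silently dropped because the new signature no longer takes it.
args = filter_kwargs(incept, name="issuer", pre="EIqTa...", conf={})
print(sorted(args))  # ['conf', 'pre']
```

The cleaner long-term fix is of course to update the call sites in `testing_helper.py` and the tests to the new signature; filtering is only a stopgap when the signature is still in flux across branches.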
After this weekend's push, yes. Sorry, it takes time to get functionality across all the repos at once.
nuttawut.kongsuwan
<@U03RLLP2CR5> Thank you so much for pointing it out
Is there any process through which the `gleif/keri` images on DockerHub get updated? The "latest" tag is 7 months old:
We are all on Discord now and no one is paying attention to Slack anymore.
:open_mouth:
<@U024CJMG22J> Link/invite to Discord?