Round 2 of training our LLM
Ron, our resident LLM expert, continued training our AI last night.
Our first attempt was prompting the LLM based on a single data submission. He then moved on to prompting the LLM based on multiple sources, and it worked really well!
Ron also threw together a quick React frontend and tested connecting it to the LLM. He hosted it on localhost:8000 to see whether sending a JSON query and returning a JSON payload with question, answer, sources, and quotes would work. Thankfully it did.
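For anyone wiring up against that endpoint, here is a minimal sketch of what parsing the response payload might look like. The field names (question, answer, sources, quotes) come from the description above, but the exact structure and example values are assumptions, not the actual API.

```python
import json


def parse_llm_response(raw: str) -> dict:
    """Parse the JSON payload returned by the LLM endpoint.

    Expects the four fields described above (question, answer,
    sources, quotes) and raises if any are missing.
    """
    payload = json.loads(raw)
    missing = {"question", "answer", "sources", "quotes"} - set(payload)
    if missing:
        raise ValueError(f"payload missing fields: {sorted(missing)}")
    return payload


# Canned example response (illustrative values only, not real output).
raw = json.dumps({
    "question": "What is a BIP?",
    "answer": "A Bitcoin Improvement Proposal.",
    "sources": ["https://github.com/bitcoin/bips"],
    "quotes": ["BIP stands for Bitcoin Improvement Proposal."],
})

payload = parse_llm_response(raw)
```

Validating the payload shape on the client side like this should make it easier for the frontend to fail loudly if the backend contract drifts.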
Next Steps
- It's pretty exciting to see all the elements coming together. Next, our frontend dev will need to wire up their design and plug in the LLM.
- FWIW, the initial source data for our LLM will be hosted on GitHub and includes BIPs, some RFCs, general info from the Bitcoin wiki, and the white paper for now. Other sources we may consider are Bitcoin Optech's topics (https://bitcoinops.org/en/topics/) and bitcoin.org's vocabulary page (https://bitcoin.org/en/vocabulary).
- We also need to figure out whether the filter criteria can score incoming data against the initial data set and factor in the data source itself. We plan to hash this out on tomorrow's call.