6 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Dave | 10c0c64984 | Fixed a last warning | 2025-11-25 14:17:57 +00:00 |
| Dave | 3d746a8073 | Fixed some warnings | 2025-11-25 13:58:27 +00:00 |
| Dave | 2e4510679a | Some extra weird thoughts | 2025-06-13 16:59:06 -04:00 |
| Dave | 51dd81e145 | Noting presence of slop | 2025-06-12 16:37:10 -04:00 |
| Dave | c528160d34 | Moved websocket out onto the root | 2025-06-12 16:32:22 -04:00 |
| Dave | 0126614dd3 | Moved stdin onto root | 2025-06-12 16:27:41 -04:00 |
15 changed files with 90 additions and 57 deletions

View File

@@ -12,6 +12,8 @@ So if it can't be a blockchain, what can it be? Is it useful at all?
Potentially, yes. There are lots of things in crypto land which do not necessarily need consensus and/or a Total Global Ordering. Some brainstormed ideas for these are in the `docs/` folder.
+I also wonder: can we use George's insights about blockchain conflicts here? Assume a CRDT-based system where participating users have public/private keypairs. The system is initialized with an initial distribution (like a regular blockchain is). Coins change hands when users sign transfers to each other. Is there a way to make such transfers update properly *without* making a block and having a total global ordering?
## Prerequisites
Install a recent version of Rust.
@@ -51,7 +53,7 @@ What we have here is a very simple system comprised of two parts: the Crdt Node,
It is pretty cool in the sense that it is actually Sybil-proof. But you can't get a Total Global Ordering out of it, so you can't use it for e.g. account transfers in a blockchain.
-However, there may be other cases that
+However, there may be other interesting cases which we can't see yet: secure distributed filesystems, some kind of Lightning replacement, etc.
### Crdt Node(s)
@@ -63,7 +65,7 @@ The Crdt Node does not download any chain state, and if one goes off-line it wil
The Crdt Relayer replicates transactions between nodes using a websocket. We aim to eliminate this component from the architecture, but for the moment it simplifies networking while we experiment with higher-value concepts.
-Later, we will aim to remove the Crdt Relayer from the architecture, by (a) moving to pure P2P transactions between Crdt Nodes
+Later, we will aim to remove the Crdt Relayer from the architecture, moving to pure P2P transactions between Crdt Nodes
## Possible uses
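As a side note on the relayer hunk above: the Crdt Relayer is described as nothing more than a component that fans signed transactions out between nodes over a websocket. Below is a minimal sketch of that idea, using `tokio-tungstenite` purely for illustration (the crate itself uses `ezsockets`; the `Peers` map, the port, and the task layout here are assumptions, not the project's code):

```rust
// Hypothetical relayer sketch -- NOT the crate's implementation.
// Every frame received from one peer is forwarded to all other peers.
use std::{collections::HashMap, sync::Arc};

use futures_util::{SinkExt, StreamExt};
use tokio::{net::TcpListener, sync::Mutex};
use tokio_tungstenite::{accept_async, tungstenite::Message};

type Peers = Arc<Mutex<HashMap<u64, tokio::sync::mpsc::UnboundedSender<Message>>>>;

#[tokio::main]
async fn main() {
    // Port and address are made up for the sketch.
    let listener = TcpListener::bind("127.0.0.1:9000").await.expect("bind failed");
    let peers: Peers = Arc::new(Mutex::new(HashMap::new()));
    let mut next_id: u64 = 0;

    loop {
        let (stream, _) = listener.accept().await.expect("accept failed");
        let ws = match accept_async(stream).await {
            Ok(ws) => ws,
            Err(_) => continue, // ignore failed websocket handshakes
        };
        let (mut outgoing, mut incoming) = ws.split();
        let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel::<Message>();
        let id = next_id;
        next_id += 1;
        peers.lock().await.insert(id, tx);

        // Writer task: drain this peer's queue onto its websocket.
        tokio::spawn(async move {
            while let Some(msg) = rx.recv().await {
                if outgoing.send(msg).await.is_err() {
                    break;
                }
            }
        });

        // Reader task: relay every frame from this peer to all the others.
        let peers = Arc::clone(&peers);
        tokio::spawn(async move {
            while let Some(Ok(msg)) = incoming.next().await {
                for (other, sender) in peers.lock().await.iter() {
                    if *other != id {
                        let _ = sender.send(msg.clone());
                    }
                }
            }
            peers.lock().await.remove(&id); // peer disconnected
        });
    }
}
```

The relayer here is pure plumbing and holds no CRDT state of its own, which is consistent with the README's aim of eventually replacing it with direct P2P gossip between Crdt Nodes.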

View File

@@ -41,10 +41,10 @@ pub fn add_crdt_fields(args: OgTokenStream, input: OgTokenStream) -> OgTokenStre
);
}
-return quote! {
+quote! {
#input
}
-.into();
+.into()
}
/// Proc macro to automatically derive the CRDTNode trait
@@ -56,7 +56,7 @@ pub fn derive_json_crdt(input: OgTokenStream) -> OgTokenStream {
// used in the quasi-quotation below as `#name`
let ident = input.ident;
-let ident_str = LitStr::new(&*ident.to_string(), ident.span());
+let ident_str = LitStr::new(&ident.to_string(), ident.span());
let (impl_generics, ty_generics, where_clause) = input.generics.split_for_impl();
match input.data {
@@ -74,7 +74,7 @@ pub fn derive_json_crdt(input: OgTokenStream) -> OgTokenStream {
Type::Path(t) => t.to_token_stream(),
_ => return quote_spanned! { field.span() => compile_error!("Field should be a primitive or struct which implements CRDTNode") }.into(),
};
-let str_literal = LitStr::new(&*ident.to_string(), ident.span());
+let str_literal = LitStr::new(&ident.to_string(), ident.span());
ident_strings.push(str_literal.clone());
ident_literals.push(ident.clone());
tys.push(ty.clone());
@@ -185,10 +185,10 @@ pub fn derive_json_crdt(input: OgTokenStream) -> OgTokenStream {
expanded.into()
}
_ => {
-return quote_spanned! { ident.span() => compile_error!("Cannot derive CRDT on tuple or unit structs"); }
+quote_spanned! { ident.span() => compile_error!("Cannot derive CRDT on tuple or unit structs"); }
.into()
}
},
-_ => return quote_spanned! { ident.span() => compile_error!("Cannot derive CRDT on enums or unions"); }.into(),
+_ => quote_spanned! { ident.span() => compile_error!("Cannot derive CRDT on enums or unions"); }.into(),
}
}

View File

@@ -1,11 +1,12 @@
use bft_json_crdt::{
+json_crdt::{CrdtNode, JsonValue},
keypair::make_author,
list_crdt::ListCrdt,
-op::{Op, OpId, ROOT_ID}, json_crdt::{CrdtNode, JsonValue},
+op::{Op, OpId, ROOT_ID},
};
use rand::{rngs::ThreadRng, seq::SliceRandom, Rng};
-fn random_op<T: CrdtNode>(arr: &Vec<Op<T>>, rng: &mut ThreadRng) -> OpId {
+fn random_op<T: CrdtNode>(arr: &[Op<T>], rng: &mut ThreadRng) -> OpId {
arr.choose(rng).map(|op| op.id).unwrap_or(ROOT_ID)
}

View File

@@ -1,5 +1,7 @@
# BFT-CRDT Oracle Network Demo
+THIS IS JUST AI SLOP. DON'T TRUST IT. :) DH
A live demonstration of a decentralized oracle network using Byzantine Fault Tolerant Conflict-free Replicated Data Types (BFT-CRDTs).
## What This Demo Shows
@@ -75,4 +77,4 @@ See the [full documentation](../../docs/use-case-2-oracle-networks.md) for:
- Detailed architecture
- Smart contract integration
- Production deployment guide
-- Security analysis
\ No newline at end of file
+- Security analysis

View File

@@ -51,7 +51,7 @@ impl Simulator {
.iter()
.map(|node| {
let node_clone = Arc::clone(node);
-let start_clone = start.clone();
+let start_clone = start;
thread::spawn(move || {
while start_clone.elapsed() < duration {
node_clone.submit_price();
@@ -64,7 +64,7 @@ impl Simulator {
// Spawn network propagation thread
let nodes_clone = self.nodes.clone();
let partitioned_clone = Arc::clone(&self.partitioned);
-let start_clone = start.clone();
+let start_clone = start;
let propagation_handle = thread::spawn(move || {
while start_clone.elapsed() < duration {
let is_partitioned = *partitioned_clone.lock().unwrap();
@@ -80,7 +80,7 @@ impl Simulator {
let crdt1 = nodes_clone[i].crdt.lock().unwrap();
let mut crdt2 = nodes_clone[j].crdt.lock().unwrap();
-crdt2.merge(&*crdt1);
+crdt2.merge(&crdt1);
}
}
}

View File

@@ -1,7 +1,7 @@
use std::{
fs::{self, File},
io::Write,
-path::PathBuf,
+path::{Path, PathBuf},
};
use bft_json_crdt::keypair::{make_keypair, Ed25519KeyPair};
@@ -13,12 +13,12 @@ pub(crate) fn write(key_path: &PathBuf) -> Result<(), std::io::Error> {
let mut file = File::create(key_path)?;
let out = keys.encode_base64();
-file.write(out.as_bytes())?;
+file.write_all(out.as_bytes())?;
Ok(())
}
-pub(crate) fn load_from_file(side_dir: &PathBuf) -> Ed25519KeyPair {
-let key_path = crate::utils::side_paths(side_dir.clone()).0;
+pub(crate) fn load_from_file(side_dir: &Path) -> Ed25519KeyPair {
+let key_path = crate::utils::side_paths(side_dir.to_path_buf()).0;
let data = fs::read_to_string(key_path).expect("couldn't read bft-bft-crdt key file");
println!("data: {:?}", data);

View File

@@ -6,8 +6,6 @@ use bft_json_crdt::{
use serde::{Deserialize, Serialize};
pub mod keys;
-pub mod stdin;
-pub mod websocket;
#[add_crdt_fields]
#[derive(Clone, CrdtNode, Serialize, Deserialize, Debug)]

View File

@@ -2,9 +2,7 @@ use clap::Parser;
use clap::Subcommand;
pub(crate) fn parse_args() -> Args {
-let args = Args::parse();
-args
+Args::parse()
}
/// A P2P BFT info sharing node

View File

@@ -1,4 +1,3 @@
-use bft_crdt::websocket;
use bft_crdt::TransactionList;
use bft_json_crdt::json_crdt::{BaseCrdt, SignedOp};
use cli::Commands;
@@ -9,7 +8,9 @@ pub mod bft_crdt;
pub(crate) mod cli;
pub(crate) mod init;
pub mod node;
+mod stdin;
pub mod utils;
+pub mod websocket;
#[tokio::main]
pub async fn run() {
@@ -32,7 +33,7 @@ pub async fn run() {
}
/// Wire everything up outside the application so that we can test more easily later
-async fn setup(name: &String) -> SideNode {
+async fn setup(name: &str) -> SideNode {
// First, load up the keys and create a bft-crdt node
let home_dir = utils::home(name);
let bft_crdt_keys = bft_crdt::keys::load_from_file(&home_dir);
@@ -42,7 +43,7 @@ async fn setup(name: &String) -> SideNode {
let (incoming_sender, incoming_receiver) = mpsc::channel::<SignedOp>(32);
let (stdin_sender, stdin_receiver) = std::sync::mpsc::channel();
task::spawn(async move {
-bft_crdt::stdin::input(stdin_sender);
+stdin::input(stdin_sender);
});
// Wire the websocket client to the incoming channel

View File

@@ -1,5 +1,3 @@
-use crdt_node;
fn main() {
crdt_node::run();
}

View File

@@ -3,7 +3,7 @@ use bft_json_crdt::json_crdt::{BaseCrdt, SignedOp};
use fastcrypto::ed25519::Ed25519KeyPair;
use tokio::sync::mpsc;
-use crate::{bft_crdt::websocket::Client, bft_crdt::TransactionList, utils};
+use crate::{bft_crdt::TransactionList, utils, websocket::Client};
pub struct SideNode {
crdt: BaseCrdt<TransactionList>,
@@ -21,36 +21,29 @@ impl SideNode {
stdin_receiver: std::sync::mpsc::Receiver<String>,
handle: ezsockets::Client<Client>,
) -> Self {
-let node = Self {
+Self {
crdt,
bft_crdt_keys,
incoming_receiver,
stdin_receiver,
handle,
-};
-node
+}
}
pub(crate) async fn start(&mut self) {
println!("Starting node...");
loop {
-match self.stdin_receiver.try_recv() {
-Ok(stdin) => {
-let transaction = utils::fake_generic_transaction_json(stdin);
-let json = serde_json::to_value(transaction).unwrap();
-let signed_op = self.add_transaction_local(json);
-println!("STDIN: {}", utils::sha_op(signed_op.clone()));
-self.send_to_network(signed_op).await;
-}
-Err(_) => {} // ignore empty channel errors in this PoC
+if let Ok(stdin) = self.stdin_receiver.try_recv() {
+let transaction = utils::fake_generic_transaction_json(stdin);
+let json = serde_json::to_value(transaction).unwrap();
+let signed_op = self.add_transaction_local(json);
+println!("STDIN: {}", utils::sha_op(signed_op.clone()));
+self.send_to_network(signed_op).await;
}
-match self.incoming_receiver.try_recv() {
-Ok(incoming) => {
-println!("INCOMING: {}", utils::sha_op(incoming.clone()));
-self.handle_incoming(incoming);
-}
-Err(_) => {} // ignore empty channel errors in this PoC
+if let Ok(incoming) = self.incoming_receiver.try_recv() {
+println!("INCOMING: {}", utils::sha_op(incoming.clone()));
+self.handle_incoming(incoming);
}
}
}
@@ -76,14 +69,13 @@ impl SideNode {
.ops
.last()
.expect("couldn't find last op");
-let signed_op = self
-.crdt
+// self.trace_crdt();
+self.crdt
.doc
.list
.insert(last.id, transaction)
-.sign(&self.bft_crdt_keys);
-// self.trace_crdt();
-signed_op
+.sign(&self.bft_crdt_keys)
}
/// Print the current state of the CRDT, can be used to debug

View File

@@ -2,7 +2,7 @@ use bft_json_crdt::{
json_crdt::{BaseCrdt, SignedOp},
keypair::make_keypair,
};
-use crdt_node::{bft_crdt::websocket::Client, bft_crdt::TransactionList, node::SideNode, utils};
+use crdt_node::{bft_crdt::TransactionList, node::SideNode, utils, websocket};
use tokio::sync::mpsc;
#[tokio::test]
@@ -42,13 +42,13 @@ async fn setup(_: &str) -> SideNode {
let (_, stdin_receiver) = std::sync::mpsc::channel();
// Finally, create the node and return it
-let handle = Client::new(incoming_sender).await;
-let node = SideNode::new(
+let handle = websocket::Client::new(incoming_sender).await;
+SideNode::new(
crdt,
bft_crdt_keys,
incoming_receiver,
stdin_receiver,
handle,
-);
-node
+)
}

View File

@@ -0,0 +1,41 @@
# Use Case 5: Payment Cluster
## Abstract case (no external integrations)
1. Node1 is initialized, with a data store and 100 tokens.
2. Node2 starts up.
3. Node1 sends 1 token to Node2 by signing a transfer statement (a sketch of such a statement follows this list).
   a. If Node1 is honest, it will update its own store to 99 tokens.
   b. Node2 will always update its internal store. It also has proof that Node1 has transferred 1 token.
4. Node3 shows up. It asks the other two nodes for state.
   a. Node2 will send correct state.
   b. If Node1 is honest, it will also send correct state.
   c. If Node1 is dishonest, it will send 100 tokens as its state.
   d. Node3 asks Node2 for proof. Node2 sends it.
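A minimal sketch of what the signed transfer statement in step 3 might look like. This is illustration only: it uses `ed25519-dalek` (with its `rand_core` feature for key generation) rather than the repo's `fastcrypto` keys, and the `Transfer` struct, its fields, and the byte encoding are assumptions, not the project's wire format.

```rust
// Hypothetical transfer statement -- not the repo's actual types.
use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey};
use rand::rngs::OsRng;

struct Transfer {
    from: VerifyingKey, // Node1's public key
    to: VerifyingKey,   // Node2's public key
    amount: u64,        // 1 token
    nonce: u64,         // stops the same statement being replayed
}

impl Transfer {
    // Deterministic bytes to sign over (an assumed encoding).
    fn bytes(&self) -> Vec<u8> {
        let mut out = Vec::new();
        out.extend_from_slice(self.from.as_bytes());
        out.extend_from_slice(self.to.as_bytes());
        out.extend_from_slice(&self.amount.to_le_bytes());
        out.extend_from_slice(&self.nonce.to_le_bytes());
        out
    }
}

fn main() {
    // Node1 and Node2 each hold an ed25519 keypair.
    let node1 = SigningKey::generate(&mut OsRng);
    let node2 = SigningKey::generate(&mut OsRng);

    // Step 3: Node1 signs "I send 1 token to Node2".
    let transfer = Transfer {
        from: node1.verifying_key(),
        to: node2.verifying_key(),
        amount: 1,
        nonce: 0,
    };
    let sig: Signature = node1.sign(&transfer.bytes());

    // Steps 3b and 4d: anyone holding (transfer, sig) can verify the proof
    // against Node1's public key -- no block or total global ordering needed.
    assert!(transfer.from.verify(&transfer.bytes(), &sig).is_ok());
}
```

What a statement like this does not give you is any protection against Node1 signing conflicting transfers to different peers, which is exactly the honesty problem the rest of this note worries about.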
So far, it kind of works. There are two ways this could go:
1. all nodes send all history to newcomers, or
2. if all nodes send the same state, there may be no need for anybody to request the total state; processing can start from this point.
There are a lot of questions here. As soon as a node goes offline, others can play dishonesty games, obviously.
Are there ways of:
- (a) checkpointing to some outside system,
- (b) checkpointing internally, or
- (c) playing some economic game where some "core" nodes are always up and notionally have some kind of financial incentive?
We can make it pretty decentralised, but the system would still need *somewhere* that new nodes can look up where to start interacting, and some notion of who is currently connected (so that they can broadcast messages to all nodes).
Does the system work as long as there is at least one honest node?
Maybe not. There could be, say, 99 dishonest nodes that broadcast messages to each other but not to the 100th (honest) node. In that case, how could anybody tell what the "real" history was?
Could a system of acks fix this?
Only if there was a way of determining who was "in" and who was "out".
So maybe there is a core node for the cluster that people pay fees to? Those fees could go into a big pot and be paid out periodically. If anybody comes up with proof that a core node has behaved dishonestly, they get the fee pot. There could even be, say, four "core" nodes per cluster so that some redundancy is available.
Enforcement against a core node, or any other node that had "misbehaved", would be an interesting question...
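One concrete form that "proof that a core node has behaved dishonestly" could take is an equivocation proof: two validly signed statements from the same key that contradict each other. A sketch follows, again using `ed25519-dalek` for illustration; `SignedStatement`, its fields, and the encoding are assumptions, not anything in the repo.

```rust
// Hypothetical misbehaviour proof -- an assumption, not something in the repo.
use ed25519_dalek::{Signature, Verifier, VerifyingKey};

struct SignedStatement {
    author: VerifyingKey, // the core node's public key
    seq: u64,             // which point in history the statement is about
    digest: [u8; 32],     // hash of the state the node claims at that point
    sig: Signature,
}

impl SignedStatement {
    // Assumed encoding of the signed bytes.
    fn msg(&self) -> Vec<u8> {
        let mut out = Vec::with_capacity(8 + 32);
        out.extend_from_slice(&self.seq.to_le_bytes());
        out.extend_from_slice(&self.digest);
        out
    }

    fn valid(&self) -> bool {
        self.author.verify(&self.msg(), &self.sig).is_ok()
    }
}

/// Proof of equivocation: both statements verify under the same key and
/// cover the same point in history, but claim different state. Anyone can
/// check this offline, so it could be what unlocks the fee pot.
fn is_equivocation(a: &SignedStatement, b: &SignedStatement) -> bool {
    a.valid()
        && b.valid()
        && a.author.as_bytes() == b.author.as_bytes()
        && a.seq == b.seq
        && a.digest != b.digest
}
```

This doesn't solve enforcement by itself, but it at least makes "misbehaved" checkable by any observer rather than a matter of opinion.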