1.1 Purpose

The Association-Set Provider (ASP) Specification V1.0, developed by 0xBow.io, defines a standardized framework for implementing, operating, and integrating with the ASP system.

This document serves to:

  1. Provide a comprehensive integration guide for protocols, end-users and compliance systems.
  2. Establish a standardized framework for building and operating an ASP.
  3. Define the technical specifications and interfaces for each component of the ASP architecture.
  4. Outline best practices for ensuring security, scalability, and privacy in ASP implementations.

The ASP is designed to support compliance mechanisms for blockchain protocols, enabling the verification of compliance with regulatory requirements and business rules.

It aims to enable privacy-preserving compliance for blockchain protocols such as Privacy Pool, by leveraging zero-knowledge proofs (ZKPs) and efficient data categorization techniques.

Current WIPs

This document is still a work in progress.

Content Overview:

See below to find the section that best suits your needs.

1.2 Scope

This specification encompasses:

  • ASP system architecture and component interactions
  • Protocol integration requirements and methodologies
  • Feature extraction and classification processes
  • Public registry smart contract specifications
  • Zero-knowledge proof generation and verification
  • End-User integration guidelines
  • Compliance policy definition and enforcement
  • Scalability and performance optimization strategies
  • Security considerations and audit mechanisms
  • Governance and upgrade processes

Out of scope for this document:

  • Detailed implementations of specific blockchain protocols
  • Comprehensive ZKP circuit designs
  • Exhaustive security audits of implementing systems

1.3 Audience

This document is intended for:

  • Professionals responsible for implementing and integrating the ASP system with existing blockchain protocols.

  • Cryptographers and researchers working on zero-knowledge proof systems and privacy-preserving technologies.

  • Teams developing blockchain protocols that aim to incorporate compliance mechanisms.

  • Individuals responsible for defining and enforcing compliance policies for blockchain protocols.

  • Professionals designing large-scale blockchain infrastructures.

1.4 Document Conventions

This document adheres to the following conventions:

[1] Mathematical Expressions:

All mathematical formulas are rendered with KaTeX.

[2] Code Snippets:

Code examples are provided in syntax-highlighted blocks.

pragma solidity ^0.8.0;

contract PublicRegistry {
    // Contract code here
}

[3] Diagrams:

System diagrams and flowcharts are presented using either Mermaid.js or D2 syntax. For example:

%%{
  init: {
    'theme': 'base',
    'themeVariables': {
      'primaryColor': '#1e1e2e',
      'primaryTextColor': '#cdd6f4',
      'primaryBorderColor': '#89b4fa',
      'lineColor': '#fab387',
      'secondaryColor': '#f9e2af',
      'tertiaryColor': '#a6e3a1',
      'noteTextColor': '#1e1e2e',
      'noteBkgColor': '#f5c2e7',
      'notesBorderColor': '#cba6f7',
      'textColor': '#cdd6f4',
      'fontSize': '16px',
      'labelTextColor': '#1e1e2e',
      'actorBorder': '#89b4fa',
      'actorBkg': '#1e1e2e',
      'actorTextColor': '#cdd6f4',
      'actorLineColor': '#89b4fa',
      'signalColor': '#cdd6f4',
      'signalTextColor': '#1e1e2e',
      'loopTextColor': '#cdd6f4',
      'activationBorderColor': '#f5c2e7',
      'activationBkgColor': '#1e1e2e',
      'sequenceNumberColor': '#1e1e2e'
    }
  }
}%%

graph LR
    A[Off-Chain Components] --> B[On-Chain Components]
    B --> C[External Systems]

[4] Terminology:

Technical terms specific to the ASP system are defined in the glossary Appendix 13.1 and are italicized upon first use in each section.

[5] References:

Citations to external documents or standards are provided in square brackets and listed in the References section Appendix 13.2.

[6] Notes and Warnings:

Important information is highlighted in note blocks via the mdbook-admonish plugin.

Note

Note: Critical implementation details are emphasized in such blocks.

Warning

Warning: Potential risks or security concerns are highlighted in warning blocks.

[7] Version Information:

Any version-specific information is clearly marked with the applicable version number.

2.1 ASP Architecture

Fig. 2.1. High-Level Architecture of the ASP System. (Diagram: the ASP System comprises the Service Stack (Observer, Category Engine) and On-Chain Instances (Public Registry, ZKP Verifier); the Observer contains the Watcher, State-Transition Detector and Record Generator, while the Category Engine contains the Feature Extractor, Classifier and Categorizer.)
The core idea of Privacy Pools is this:

Instead of merely proving that their withdrawal is linked to some previously-made deposit, a user proves membership in a more restrictive association set. This association set could be the full subset of previously-made deposits, a set consisting only of the user’s own deposits, or anything in between …

Users will subscribe to intermediaries, called association set providers (ASPs), which generate association sets that have certain properties.” [1]

The 0xBow ASP system is an implementation of the ASP concept initially introduced for Privacy Pool, but now extended to facilitate compliance mechanisms across multiple blockchain protocols.

Its architecture is relatively simple and consists of two main components:

  • Service Stack: Two modular services working in concert to monitor, classify, and verify state transitions.
  • On-Chain Instances: Components supporting on-chain integrations with the ASP.
[1]
["Blockchain privacy and regulatory compliance: Towards a practical equilibrium"](https://www.sciencedirect.com/science/article/pii/S2096720923000519),
V. Buterin, A. Soleimani, et al., 2023

2.2 Key Components

2.2.1 Observer:

Fig. 2.2. Observation Flow. (Diagram: the Protocol feeds the observation pipeline of Watcher, State-Transition Detector and Record Generator, with a State Buffer caching states; annotated with $T(s, i) \to s'$, $\Delta S(s, s') \to \delta$ and $H(s, e) \to h$.)
The Observer is a service that monitors & records the state-changes of specific protocols in real-time. It comprises the following modules:

  1. Watcher: Watches the network for signals (e.g. event emissions) by protocols that indicate a state-change has occurred.

    It requires protocol-specific components (adapter & parser) in order to interface with the protocol at the network level.

    It is possible to have a 1-to-1 implementation of the watcher module for each protocol, or a single watcher module that can be configured to monitor multiple protocols via pluggable adapters & parsers.

  2. State Transition Detector: Identifies and validates the state transitions signalled by the watcher module.

    The Detector is notified of the state-change by the watcher module. In response, it attempts to rebuild a representation of the state from data cached in the state buffer.

    The state buffer is a ring-buffer for efficient caching of new states and quick retrieval of old states.

    It then compares the current state with the previous state using the protocol’s state comparator function: $\Delta S(s, s') \to \delta$

    Where $\delta \neq 0$ indicates a state transition.

  3. Record Generator: Creates cryptographic records of state transitions, defined as the tuple: $R = (\text{scope}, e, h, h')$

    Where:

    • $\text{scope}$ is the unique identifier of the protocol instance
    • $e$ is the reference to the state transition event (i.e. the block number, transaction hash)
    • $h$ is the pre-state hash
    • $h'$ is the post-state hash
    • $h$ & $h'$ are computed by a state-hash function: $H(s, e) \to h$
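To make this concrete, below is a minimal Go sketch of the detection-to-record decision, assuming illustrative State/Event types and an injected hash function; the real comparator and hash are protocol-specific.

package observer

// Illustrative types; protocol-specific in practice.
type State []byte

type Event struct {
	TxHash [32]byte
	LogIdx uint
}

// deltaS sketches the comparator ΔS(s, s') → δ;
// a non-zero δ indicates a state transition.
func deltaS(s, sPrime State) int {
	if string(s) != string(sPrime) {
		return 1
	}
	return 0
}

// onSignal is invoked by the watcher with (e, s, s') and decides
// whether a record of the transition should be generated.
func onSignal(e Event, s, sPrime State, hash func(State, Event) [32]byte) (h, hPrime [32]byte, ok bool) {
	if deltaS(s, sPrime) == 0 {
		return h, hPrime, false // discard: no transition
	}
	h, hPrime = hash(s, e), hash(sPrime, e)
	if h == hPrime {
		return h, hPrime, false // discard: identical state hashes
	}
	return h, hPrime, true // compose the record (scope, e, h, h')
}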

2.2.2 Categorization Engine:

Fig.2.3. Categorization FlowObserverCategorization PipelineR QueueB queueFeature-Extractorsclassifierscategorizers  [categories][features]RRB















The Categorization Engine is crucial to the ASP’s ability to support compliance mechanisms & allow end-users to generate “Association Sets”.

The objective of the categorization is to correctly identify attributes or properties (expressed as categories) of the state-transition event which are relevant to the compliance requirements specified by the protocol (or other entities).

Visit the classification & categorization and feature extraction sections for more details on the categorization process.

The Categorization Engine executes a FIFO pipeline of feature-extraction, classification & categorization algorithms to categorise the state-transition event referenced by the record $R$.

The output is a 256-bit vector termed the “category bitmap” or $B$, where each bit represents a specific category.

The Category Pipeline is the sequential execution of:

  1. Feature Extractors: Analyzes records to extract relevant features for classification.

  2. Classifiers: Categorizes records based on extracted feature sets.

  3. Categorizers: Generates a 256-bit category bitmap to reflect the record’s categories
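A minimal Go sketch of this sequential pipeline follows; the three interfaces are illustrative assumptions for the sketch, not the normative interfaces defined later in this document.

package engine

// Illustrative pipeline stages (assumptions for this sketch).
type Extractor interface {
	Extract(record []byte) (features []byte)
}

type Classifier interface {
	Classify(features []byte) (labels []string)
}

type Categorizer interface {
	Categorize(labels []string) (bitmap [32]byte)
}

// Categorize runs the FIFO pipeline over a serialized record R
// and returns the 256-bit category bitmap B.
func Categorize(r []byte, ex Extractor, cl Classifier, ca Categorizer) [32]byte {
	features := ex.Extract(r)       // 1. feature extraction
	labels := cl.Classify(features) // 2. classification
	return ca.Categorize(labels)    // 3. 256-bit category bitmap
}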

2.2.3 On-Chain Instances:

  1. Public Registry: Set of on-chain smart contracts for storing & querying ASP on-chain data.

    More details can be found in the public registry section.

    The on-chain public registry supports on-chain integration with the ASP, allowing for the following:

    • Generation of Association Sets whilst preserving end-user privacy.
    • Direct integration with protocol contracts for compliance verification.
    • Serving public inputs directly to on-chain verifiers during a transaction that requires compliance verification.

  2. ZKP Verifier: On-chain component that verifies zero-knowledge proofs of compliance.

    See the Zero-Knowledge Proofs section for more details.

2.3 Data Flow

  1. State Transition Detection:

    • Watcher observes registered protocols for protocol interactions.
    • State changes are detected by the State Transition Detector.
  2. Record Generation and Classification:

    • Record Generator creates a cryptographic record $R$ of the state transition.
    • Feature Extractor processes $R$ to extract relevant features.
    • Classifier categorises $R$ based on extracted features.
    • Categorizer creates a 256-bit category bitmap for $R$.
  3. On-Chain Storage:

    • The category bitmap and the associated record hash are stored on-chain in the Public Registry.
  4. Querying:

    • External systems can query the Public Registry to retrieve subsets of record hashes based on scope, category or feature criteria.
  5. Privacy-Preserving Compliance Verification:

    • An entity requests a compliance proof for a set of records.
    • ZKP Generator computes a proof that the records satisfy the predicate $P$, where the predicate is the given compliance policy expressed as a category bitmap.

2.4 Security Model

Important

A comprehensive security analysis and formal proofs are beyond the scope of this overview and are addressed in subsequent sections.

Principles

The ASP security model is based on the following principles:

  1. Immutability: The Public Registry is append-only and immutable, ensuring the integrity of stored records.

  2. Privacy: Zero-knowledge proofs enable compliance verification without revealing sensitive details.

  3. Decentralization: The system operates across multiple protocols and doesn't rely on a single point of trust.

  4. Access Control: Strict policies to prevent unauthorized modifications to the Public Registry.

  5. Cryptographic Integrity: All records and proofs are cryptographically secured.

The security of the ASP system relies on the following assumptions:

  1. The underlying blockchain protocol’s security (e.g., Ethereum’s consensus mechanism).
  2. The cryptographic security of the hash functions used (e.g., Keccak-256, Poseidon).
  3. The soundness and zero-knowledge properties of the ZKP system employed.

Key security considerations include:

  1. Sybil Resistance: All system components must be resistant to Sybil attacks.

  2. Front-Running Protection: Measures to prevent front-running of compliance proofs.

  3. Privacy Leakage: Careful design & implementation of Interfaces & communication channels to prevent inadvertent privacy leaks through query patterns.

  4. Upgrade Security: Secure processes for updating classification rules and compliance policies.

3.1 Protocol Requirements

For a protocol to integrate with the ASP system, it must satisfy the following requirements:

  1. State Machine Representation

    The protocol must be representable as a state machine with the following tuple: $(\text{Scope}, S, I, O, T, H, V, \Delta S)$

Where:

  • $\text{Scope}$: Hash function to compute the unique identifier of the protocol instance
    • An implementation of a protocol on a specific chain (i.e. a contract deployed on Ethereum) is an instance of a protocol
    • The Scope function computes the unique identifier of the protocol instance
    • I.e. Keccak256 hash of (address, chainID, contractCode)
  • $S$: State space
    • The agreed-upon state space of the protocol
  • $I$: Input dictionary
    • The agreed-upon inputs that can trigger a state transition in the protocol
  • $O$: Output dictionary
    • The agreed-upon outputs that are returned by the state transition function of the protocol
  • $T$: Transition function, $T(s, i) \to s'$
    • $s$ = the agreed-upon pre-state satisfying $S$
    • $s'$ = the post-state satisfying $S$
    • $i$ = the state transition inputs satisfying $I$
  • $H$: State hash function, $H(s, e) \to h$
    • $s$ = the state of the protocol
    • $e$ = reference to the state-transition event, which can be a singular value or a tuple of values (i.e. block number, transaction number, log number)
    • $h$ = the hash of the state, which must be a unique identifier of the state at $e$
  • $V$: Verification function
    • A function that can verify that $s$ is the pre-image of $h$ at $e$, satisfying the transition function
  • $\Delta S$: State comparator function, $\Delta S(s, s') \to \delta$
    • A function that can compare two states and return the difference $\delta$ between them.
    • $\delta \neq 0$ indicates a state transition
  2. Deterministic State Transitions: The protocol must have deterministic state transitions to ensure consistent record generation.

  3. Observable State: The protocol must expose sufficient information to reconstruct its state at any given epoch.

  4. Unique Identifiers: Each protocol instance must have a unique identifier computable by the $\text{Scope}$ function.

  5. Event Emission: The protocol should emit events for all state-changing operations to facilitate efficient monitoring.
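For illustration only, the requirements above could be captured as a Go interface that an integration adapter implements; the type names and signatures below are assumptions, not part of the normative specification.

package protocol

// Illustrative types; protocol-specific in practice.
type State []byte
type Input []byte

type Event struct {
	Block  uint64
	TxHash [32]byte
	LogIdx uint
}

// StateMachine sketches the surface a protocol must expose.
type StateMachine interface {
	// Scope computes the unique identifier of the protocol instance,
	// e.g. Keccak256(address, chainID, contractCode).
	Scope() [32]byte
	// Transition applies T(s, i) → s'.
	Transition(s State, i Input) (sPrime State, err error)
	// StateHash computes H(s, e) → h, a unique identifier of s at e.
	StateHash(s State, e Event) (h [32]byte)
	// Verify checks that s is the pre-image of h at e.
	Verify(s State, e Event, h [32]byte) bool
	// Compare implements ΔS(s, s') → δ; δ ≠ 0 signals a transition.
	Compare(s, sPrime State) (delta int)
}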

3.2 State Transition Monitoring

Note

To register a protocol, please view the steps outlined in the "For Onchain Protocols" section.

State transition monitoring involves observing the blockchain for relevant events and state changes in integrated protocols.
---
title: "Figure 3.1: State Transition Monitoring Sequence Diagram"
---

%%{
  init: {
    'theme': 'base',
    'themeVariables': {
      'primaryColor': '#1e1e2e',
      'primaryTextColor': '#cdd6f4',
      'primaryBorderColor': '#89b4fa',
      'lineColor': '#fab387',
      'secondaryColor': '#181825',
      'tertiaryColor': '#1e1e2e',
      "clusterBorder": "#f2cdcd",
      'noteTextColor': '#f5e0dc',
      'noteBkgColor': '#f5c2e7',
      'notesBorderColor': '#cba6f7',
      'textColor': '#f5e0dc',
      'fontSize': '16px',
      'labelTextColor': '#f5e0dc',
      'actorBorder': '#89b4fa',
      'actorBkg': '#1e1e2e',
      'actorTextColor': '#f5e0dc',
      'actorLineColor': '#89b4fa',
      'signalColor': '#cdd6f4',
      'signalTextColor': '#f5e0dc',
      'messageTextColor': '#f5e0dc',
      'messageLine0TextColor': '#f5e0dc',
      'messageLine1TextColor': '#f5e0dc',
      'loopTextColor': '#f5e0dc',
      'activationBorderColor': '#f5c2e7',
      'activationBkgColor': '#1e1e2e',
      'sequenceNumberColor': '#1e1e2e'
    }
  }
}%%

sequenceDiagram
    participant BC as Protocol
    participant PM as Watcher
    participant STD as State Transition Detector
    participant RG as Record Generator

    BC->>PM: Event signal (e)
    PM->>STD: Forward Event signal (e)
    STD->>STD: Reconstruct States (s, s')
    STD->>STD: Compare States (ΔS)
    alt δ ≠ 0
        STD->>RG: Detected State-Transition (e, s, s')
        RG->>RG: Hash(s') for h'
        RG->>RG: Hash(s) for h
        alt h' ≠ h
            RG->>RG: Record State-Transition (e, s, s')
        else h'= h
            RG->>RG: Discard
        end
    else δ = 0
      STD->>STD: Discard
    end

The process follows these steps:

  1. Protocol Registration: Protocols are registered with the ASP system, providing their $\text{Scope}$ function and event signatures.

  2. Event Listening: The Watcher subscribes to signals (event emissions) from registered protocols that indicate a state change and forwards the event signal to the State Transition Detector.

  3. State Reconstruction:

    Important

    It is expensive & inefficient for the Observer to reconstruct or store the entire state of the protocol.

    $s$ and $s'$ are only state representations / proofs which carry enough information to verify a state-transition with the comparator function $\Delta S$.

    For example: $s$ could be a merkle-proof of a state-root, and $s'$ could be the new state-root.

    With a well-defined state-space $S$ and state transition-function $T$, $s$ and/or $s'$ is reconstructed from data carried by $e$ and the cached pre-image read from the state-buffer.
  4. State Comparison: The $\Delta S$ function is applied to determine if a meaningful state transition has occurred: $\Delta S(s, s') \to \delta$

    Where $\delta \neq 0$ indicates a state transition.

  5. Trigger Record Generation: The tuple $(e, s, s')$ is sent to the Record Generator to compose a cryptographic record of the state transition if $\delta \neq 0$.

  6. Record Generation: The Record Generator hashes the previous state and the new state to create a record of the state transition: $R = (\text{scope}, e, h, h')$

    Where $h$ and $h'$ are the hashes of the previous and new states $s$ and $s'$.

3.3 Record Generation

A Record ($R$) is a data structure that captures the state transition of a protocol instance and can be represented as a tuple: $R = (\text{scope}, e, h, h')$

Where:

  • $\text{scope}$ is the unique identifier of the protocol instance
  • $e$ is the reference to the state transition event (i.e. the block number, transaction hash, log index)
  • $h$ is the pre-state hash
  • $h'$ is the post-state hash

Records exist as serialized binary objects. The construction of the Record object is performed by the Record Generator, a component of the ASP system.

The Record Generator performs the following steps:

  1. Compute the scope using the protocol’s $\text{Scope}$ function if not already provided.
  2. Compute $h$ and $h'$ using the protocol’s state hash function $H$.
  3. Assemble the Record object.
  4. Compute the Record hash $H(R)$.

Example Go implementation of a Record ($R$):


package record

// Layout (32 + 32 + 1 + 32 + 32 = 129 bytes):
// 32-byte Scope
// 32-byte Tx Hash
// 1-byte Log Index
// 32-byte pre-state hash
// 32-byte post-state hash
type RecordT [129]byte

type Record interface {
	Hash() [32]byte
	Scope() [32]byte
	TxHash() [32]byte
	LogIdx() uint
	PreState() [32]byte
	PostState() [32]byte
}

var _ Record = (*RecordT)(nil)

func (r RecordT) Hash() [32]byte {
	// HashAlgorithm is a placeholder for the record hash
	// function (e.g. Keccak-256 or Poseidon).
	return HashAlgorithm(r)
}

func (r RecordT) Scope() (scope [32]byte) {
	copy(scope[:], r[:32])
	return
}

func (r RecordT) TxHash() (txHash [32]byte) {
	copy(txHash[:], r[32:64])
	return
}

func (r RecordT) LogIdx() (logIndex uint) {
	return uint(r[64])
}

func (r RecordT) PreState() (preStateHash [32]byte) {
	copy(preStateHash[:], r[65:97])
	return
}

func (r RecordT) PostState() (postStateHash [32]byte) {
	copy(postStateHash[:], r[97:129])
	return
}
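A hypothetical usage of the layout above: assembling the 129-byte record from its parts and reading the fields back (the helper name is an assumption for illustration).

package record

// buildRecord assembles a RecordT from its parts; illustrative only.
func buildRecord(scope, txHash, pre, post [32]byte, logIdx byte) (r RecordT) {
	copy(r[:32], scope[:])
	copy(r[32:64], txHash[:])
	r[64] = logIdx // single-byte log index in this layout
	copy(r[65:97], pre[:])
	copy(r[97:129], post[:])
	return
}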

3.4 Integration Best Practices

Tip

To see all possible integrations with the ASP see this section.

When integrating a protocol with the ASP system, consider the following best practices:

  1. Efficient State Representation: Design the state space to be as compact as possible while still capturing all relevant information.

  2. Granular Events: Emit fine-grained events for state changes to allow precise monitoring and record generation.

  3. Optimized Hash Functions: Implement efficient hash functions for $H$ and $\text{Scope}$ to minimize computational overhead.

    Example of a $\text{Scope}$ function in Solidity:

    function computeScope() public view returns (bytes32) {
        return keccak256(abi.encodePacked(
            address(this),
            block.chainid,
            _VERSION
        ));
    }
    
  4. Versioning: Include protocol version information in the $\text{Scope}$ to handle protocol upgrades gracefully.

  5. Gas Optimization: For on-chain components, optimize gas usage in event emission and state transitions.

  6. Privacy Considerations: Ensure that emitted events and exposed state do not leak sensitive information.

  7. Deterministic Implementations: Guarantee deterministic behavior in all protocol functions to ensure consistent record generation across different nodes.

  8. Cross-Chain Compatibility: For protocols operating across multiple chains, ensure the $\text{Scope}$ function incorporates chain identifiers.

  9. Testnet Integration: Always test ASP integration on testnets before deploying to mainnet.

  10. Documentation: Provide comprehensive documentation of the protocol’s state space, transition functions, and event structures to facilitate seamless integration.

4.1 Feature Extractor Interface

Feature-extraction refers to the process of transforming raw data into numerical features that can be processed whilst preserving the information in the original data set.

Feature Extraction is category-driven, whereby features are properties, attributes, or characteristics of a Record that are meaningfully grouped together to form a category:

  • Example Features:

    • Accounts associated with the Record belonging to a certain list, i.e. the wallet address that executed the transaction:
      • Feature: ACCOUNT_BLACKLISTED
    • Asset exposure to certain sources or activities such as Gambling, Money Laundering, etc.
      • Feature: ASSET_EXPOSURE_MONEYLAUNDERING
    • Volume of Assets transferred by a particular list of accounts
      • Feature: ACCOUNTLIST_ASSET_TRANSFER_VOLUME
    • Time of the day when the transaction was executed
      • Feature: TRANSACTION_TIME
    • If a certain internal call went against a specific Policy
      • Feature: POLICY_0X1A0B_VIOLATION
  • Example Categories:

    • UNAPPROVED category that groups the following features:
      • POLICY_0X1A0B_VIOLATION
      • ACCOUNTLIST_ASSET_TRANSFER_VOLUME
    • APPROVED category that groups the following features:
      • ACCOUNT_BLACKLISTED
      • ASSET_EXPOSURE_MONEYLAUNDERING
    • SUSPICIOUS category that groups the following features:
      • ACCOUNT_BLACKLISTED
      • ASSET_EXPOSURE_MONEYLAUNDERING

The Feature Extractor is responsible for extracting these features from the Record and delivering them in a structured format. Its interface should be minimalistic and easy to implement.

Below is an example interface for the Feature Extractor:

package extractor

// FeatureExtractorMetadata
// provides metadata about the Feature Extractor.
// Methods are exported so the interface can be implemented
// outside this package.
type FeatureExtractorMetadata interface {
	Name() string
	Description() string
	Version() string
	Author() string
	License() string
	URL() string
	// FeatureSchema returns
	// the serialized JSON schema for the Feature
	FeatureSchema() []byte
	PluginList() []string
}

type FeatureExtractor interface {
	FeatureExtractorMetadata
	// ExtractFeatures extracts
	// features from the given serialized Record
	ExtractFeatures([]byte) []byte
}

The ExtractFeatures function is the core method, which processes a given (serialized) Record object and returns the data required for the classification process.

4.2 Feature Types and Formats

Valuable and meaningful classification depends on the feature-extractor’s ability to extract the required features. To assist with this, the ASP system should define a set of features that are relevant to the classification process.

Features should be represented in a standardized, structured & verifiable format to ensure compatibility across different components of the ASP system & facilitate interoperability with external systems.

4.2.1 JSON Schema for Feature Definition

Category Feature Schema is a JSON Schema document that defines the features of a category, i.e:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "tag:0xbow.io,2024:categories:AML_COMPLIANT",
  "title": "Record is AML Compliant",
  "type": "object",
  "properties": {
    "features": {
      "type": "object",
      "properties": {
        "OFAC_LIST_MEMBERSHIP": {
          "$id": "tag:0xbow.io,2024:categories:AML_COMPLIANT:features:OFAC_LIST_MEMBERSHIP",
          "type": "boolean",
          "default": "true"
        },
        "FATF_LIST_MEMBERSHIP": {
          "$id": "tag:0xbow.io,2024:categories:AML_COMPLIANT:features:FATF_LIST_MEMBERSHIP",
          "type": "boolean",
          "default": "true"
        },
        "TRANSACTION_AMOUNT": {
          "$id": "tag:0xbow.io,2024:categories:AML_COMPLIANT:features:TRANSACTION_AMOUNT",
          "type": "integer",
          "default": "1000"
        }
      },
      "required": [
        "OFAC_LIST_MEMBERSHIP",
        "FATF_LIST_MEMBERSHIP",
        "TRANSACTION_AMOUNT"
      ]
    }
  },
  "required": ["features"]
}

In the above example, the AML_COMPLIANT category has three features:

  • OFAC_LIST_MEMBERSHIP
  • FATF_LIST_MEMBERSHIP
  • TRANSACTION_AMOUNT

This schema can be translated into a data structure which can be validated against the schema, i.e.:

// As a struct
// with custom tags
type AML_COMPLIANT_CATEGORY struct {
	_ struct{} `id:"0xbow.io,2024"`
	_ struct{} `category:"AML_COMPLIANT"`
	OFAC_LIST_MEMBERSHIP bool `feature:"OFAC_LIST_MEMBERSHIP" default:"true"`
	FATF_LIST_MEMBERSHIP bool `feature:"FATF_LIST_MEMBERSHIP" default:"true"`
	TRANSACTION_AMOUNT   int  `feature:"TRANSACTION_AMOUNT" default:"1000"`
}

4.2.1.1 Generating Category Feature Schema:

JSON Schemas like the one above can be generated by a schema generator. Below is an example walkthrough of creating a schema generator in Go using the swaggest/jsonschema-go package:

(a) Declaring the interfaces for Features:

package feature

import (
	"github.com/swaggest/jsonschema-go"
)

type FeatureType struct {
	*jsonschema.Type
}

type Feature interface {
	T() FeatureType
	String() string
	Attributes() []interface{}
	Schema(idPrefix string) *jsonschema.Schema
}

type FeatureAttribute interface {
	String() string
	TagType() string
	Tag(string) string
}


(b) Declaring Feature Schema for a Category:

package amlCompliantCat

import (
	"fmt"

	. "github.com/0xbow-io/asp-spec-V1.0/pkg/feature"
	"github.com/swaggest/jsonschema-go"
)

type _Feature uint

const (
	OFAC_LIST_MEMBERSHIP _Feature = iota
	FATF_LIST_MEMBERSHIP
	TRANSACTION_AMOUNT
)

var _ Feature = (*_Feature)(nil)


func (f _Feature) T() FeatureType {
	switch f {
	case OFAC_LIST_MEMBERSHIP:
		return FeatureType{
			Type: new(jsonschema.Type).WithSimpleTypes(jsonschema.Boolean),
		}
	case FATF_LIST_MEMBERSHIP:
		return FeatureType{
			Type: new(jsonschema.Type).WithSimpleTypes(jsonschema.Boolean),
		}
	case TRANSACTION_AMOUNT:
		return FeatureType{
			Type: new(jsonschema.Type).WithSimpleTypes(jsonschema.Number),
		}
	}
	return FeatureType{Type: new(jsonschema.Type)}
}


func (f _Feature) Feature() Feature {
	return &f
}

func (f _Feature) String() string {
	return [...]string{
		"OFAC_LIST_MEMBERSHIP",
		"FATF_LIST_MEMBERSHIP",
		"TRANSACTION_AMOUNT",
	}[f]
}

// Indices into the attribute slices below; shared with Schema().
const (
	required = iota
	_default
)

func (f _Feature) Attributes() []interface{} {
	return [...][]interface{}{
		// OFAC_LIST_MEMBERSHIP: required, default true
		{true, true},
		// FATF_LIST_MEMBERSHIP: required, default true
		{true, true},
		// TRANSACTION_AMOUNT: required, default 1000
		{true, 1000},
	}[f]
}

func (f _Feature) Schema(idPrefix string) (schema *jsonschema.Schema) {
	id := fmt.Sprintf("%s:features:%s", idPrefix, f.String())
	schema = &jsonschema.Schema{
		ID:   &id,
		Type: f.T().Type,
	}

	// Apply the feature's attributes; this sketch only sets the
	// default value (pattern, min/max etc. would follow the same shape).
	schema.WithDefault(f.Attributes()[_default])

	return
}


(c) Schema Generator:

package amlCompliantCat

import (
	"encoding/json"
	"fmt"

	. "github.com/0xbow-io/asp-spec-V1.0/pkg/feature"
	"github.com/swaggest/jsonschema-go"
)

type CategorySchema struct {
	*jsonschema.Schema
}

func (s *CategorySchema) MarshalJSON() ([]byte, error) {
	return json.MarshalIndent(s.Schema, "", " ")
}

func (s *CategorySchema) applyFeatures(features []Feature) {
	for i, feature := range features {
		// add the feature schema to the array
		s.Schema.Properties["features"].TypeObject.Properties[feature.String()] = jsonschema.SchemaOrBool{
			TypeObject: feature.Schema(*s.Schema.ID),
		}
		s.Schema.Properties["features"].TypeObject.Required[i] = feature.String()
	}
}

func (s *CategorySchema) Generate(
	label,
	title string,
	features []Feature) CategorySchema {
	id := fmt.Sprintf("0xbow.io,2024:categories:%s", label)
	s = &CategorySchema{
		Schema: &jsonschema.Schema{
			ID:    &id,
			Title: &title,
			Properties: map[string]jsonschema.SchemaOrBool{
				// category - feature schema
				"features": {
					TypeObject: &jsonschema.Schema{
						Required:   make([]string, len(features)),
						Type:       new(jsonschema.Type).WithSimpleTypes(jsonschema.Object),
						Properties: make(map[string]jsonschema.SchemaOrBool),
					},
				},
			},
		},
	}
	s.applyFeatures(features)
	return *s
}

(d) Defining the Schema for AML_COMPLIANT category:

package amlCompliantCat

import (
	. "github.com/0xbow-io/asp-spec-V1.0/pkg/feature"
)

type _CategorySchema interface {
	MarshalJSON() ([]byte, error)
	Generate(label, title string, features []Feature) CategorySchema
}


var AML_COMPLIANT = new(CategorySchema).Generate(
	// Label
	"AML_COMPLIANT",
	// title
	"Record is AML Compliant",
	// Required Features
	[]Feature{
		Feature(OFAC_LIST_MEMBERSHIP),
		Feature(FATF_LIST_MEMBERSHIP),
		Feature(TRANSACTION_AMOUNT),
	})

4.3 Implementing Feature Extractors

Tip

Feature extraction can delegate to / leverage specialized services via plugins, i.e.:

  • Chainalysis can extract AML-compliance related features.
  • An adapter for the Chainalysis API can be implemented as a plugin.
  • This plugin allows feature extraction to leverage Chainalysis services.

Specialized feature-extraction logic / algorithms can be implemented as gadgets, i.e.:

  • EVM-Call-Tracer gadget for fast-tracing of internal calls.
  • Token-Tracker gadget for tracking all token movements.
4.3.1 Considerations:

As faults in feature extraction can have downstream impact on the classification process, the implementation of a feature extractor must take into account the complexity of the feature extraction process to ensure that:

  • The process is reliable and deterministic to avoid errors.
  • The process is verifiable to ensure that the extracted features are correct.
  • The process is designed to be as fast as possible to avoid delays in the classification process.
  • The process is designed to be as efficient as possible to avoid unnecessary resource consumption.
  • The process is designed to be as secure as possible to avoid data leaks.
  • The process is designed to be as scalable as possible to handle large volumes of data.

4.3.2 Inputs & Outputs:

To eliminate unwanted influence by other components, the feature-extractor is designed to:

  • Accept only the Record as input.
  • Be integrated with access-control systems with policies to restrict access to the feature-extraction configuration or runtime environment.
  • Log any incoming data packets from external sources to ensure data integrity.

Outputs of the feature extraction process are serialized data encoded in a verifiable format. The extractor should sign its output to ensure that the data is not tampered with during transit.

The output should not be a stateful object such as a document or a database record consisting of the extracted features. Instead it is a set of verifiable JSON Patch operations that can be applied to a known & verified state to derive the extracted features, i.e.:

[
  {
    "op": 1,
    "root": "0x010010",
    "path": "HIGH_RISK/DIRECT_SANCTIONED_ENTITY_EXPOSURE",
    "value": "0x0f001010",
    "merkle-proof": {}
  },
  {
    "op": 2,
    "root": "0x010010",
    "path": "HIGH_RISK/INDIRECT_SANCTIONED_ENTITY_EXPOSURE",
    "value": "0x0f001010",
    "merkle-proof": {}
  }
]

These operations are applied to a default feature-document, which is a document containing the features for a category (per the Schema) but with default values (encrypted or encoded values).

This document is represented as a merkle-tree, where merkle-proofs are used to verify data integrity.
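Below is a minimal Go sketch of applying such patch operations to a flat default feature-document; the op encoding and document shape are assumptions for illustration, and a production implementation would also verify each merkle-proof against the document root.

package patch

import "errors"

// Op mirrors the patch operations above; the op encoding and
// field types are assumptions for this sketch.
type Op struct {
	Op    int    // e.g. 1 = add, 2 = replace (assumed encoding)
	Root  string // expected root of the default document
	Path  string // feature path, e.g. "HIGH_RISK/..."
	Value string
}

// Apply checks each op against the expected document root and
// writes the value; merkle-proof verification is omitted here.
func Apply(doc map[string]string, root string, ops []Op) error {
	for _, op := range ops {
		if op.Root != root {
			return errors.New("patch op targets a different document root")
		}
		if _, known := doc[op.Path]; !known && op.Op == 2 {
			return errors.New("replace op for unknown feature path")
		}
		doc[op.Path] = op.Value
	}
	return nil
}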

4.3.3 Plugins & Gadgets:

The feature extraction process should be modularized to allow for easy extension, testing and maintenance. Interfaces to external systems (i.e. the Chainalysis API) should be abstracted to allow dependency injection and to support testing of the feature extraction process in isolation.

Below is an example of a feature-extractor implemented in Go. It utilises plugins for interfacing with external systems for feature extraction, and gadgets (i.e. interpreting EVM call-traces) for specialized feature extraction.

package highRiskCat

import (
	"encoding/json"

	. "github.com/0xbow-io/asp-spec-V1.0/pkg/feature/extraction/storage"
	. "github.com/0xbow-io/asp-spec-V1.0/pkg/feature/extraction/extractors"
	. "github.com/0xbow-io/asp-spec-V1.0/pkg/feature/extraction/gadgets"
	. "github.com/0xbow-io/asp-spec-V1.0/pkg/feature/extraction/plugins"

	"github.com/swaggest/jsonschema-go"
)

var (
	// Plugin IDs
	plugins = []string{
		"PLUG_CA_01",
	}
	// Gadget IDs
	gadgets = []string{
		"GA_01",
	}
	// Cache IDs
	storages = []string{
		"DB_01",
	}
)

type feature struct {
	ID      string `json:"$id"`
	Minimum int    `json:"Minimum"`
	Maximum int    `json:"Maximum"`
	Type    string `json:"Type"`
	Default string `json:"Default"`
}

func applyFeatureSchema(feature *feature, spec *jsonschema.Schema) error {
	// quickest is to marshal then unmarshal
	b, err := spec.MarshalJSON()
	if err != nil {
		return err
	}
	return json.Unmarshal(b, feature)
}

type _Extractor struct {
	schema   []byte
	pluginCl PluginCl
	gadgetCl GadgetCl
	storageCl  StorageCl
}

var _ FeatureExtractor = (*_Extractor)(nil)

func Init(schema []byte) *_Extractor {
	ex := _Extractor{schema: schema}
	// init plugins
	for _, id := range plugins {
		if ex.pluginCl.Connect(id) != nil {
			return nil
		}
	}
	// init gadgets
	for _, id := range gadgets {
		if ex.gadgetCl.Connect(id) != nil {
			return nil
		}
	}
	// init cache
	for _, id := range storages {
		if ex.storageCl.Connect(id) != nil {
			return nil
		}
	}
	return &ex
}

// Implements the FeatureExtractorMetadata interface
func (ex *_Extractor) Name() string          { return "HIGH_RISK_CATEGORY_EXTRACTOR" }
func (ex *_Extractor) Description() string   { return "Extracting features for the HIGH_RISK Category" }
func (ex *_Extractor) Version() string       { return "0.1.0" }
func (ex *_Extractor) Author() string        { return "0xbow.io" }
func (ex *_Extractor) License() string       { return "MIT" }
func (ex *_Extractor) URL() string           { return "github.com/0xbow.io/asp-v1.0/" }
func (ex *_Extractor) FeatureSchema() []byte { return ex.schema }
func (ex *_Extractor) PluginList() []string  { return plugins }

// Parses the Schema to build a feature set
func (ex *_Extractor) featureSet() (set []feature) {
	var (
		category = jsonschema.Schema{}
	)
	if category.UnmarshalJSON(ex.schema) == nil {
		// iterate over the properties of the "features" object
		featureProps := category.Properties["features"].TypeObject.Properties
		set = make([]feature, 0, len(featureProps))
		for _, prop := range featureProps {
			var ft feature
			// apply feature schema
			if applyFeatureSchema(&ft, prop.TypeObject) == nil {
				set = append(set, ft)
			}
		}
	}
	return
}

// Stubs for the full implementation: diffing feature values,
// computing the feature-document merkle root, and signing output.
func comparator(x [32]byte, y [32]byte) *Op { return nil }
func mtRoot(map[string][32]byte) [32]byte   { return [32]byte{} }
func (ex *_Extractor) sign(v []byte) []byte { return nil }
func (ex *_Extractor) ExtractFeatures(record []byte) (out []byte) {
	// Extraction logic omitted: parse the record, invoke plugins &
	// gadgets, derive JSON patch ops, then sign & serialize them.
	return
}

4.4 Performance Considerations

Feature extraction can be computationally intensive and difficult to scale. The following optimization strategies have been considered to ensure that feature extraction is efficient and scalable:

  1. Parallel Processing: Parallel & distributed feature extraction for independent features, as shown in Figure 4.1 below.
---
title: "Figure 4.1: Distributed Parallel Feature Extractors"
---


  %%{
    init: {
      'theme': 'base',
      'themeVariables': {
        'primaryColor': '#1e1e2e',
        'primaryTextColor': '#cdd6f4',
        'primaryBorderColor': '#89b4fa',
        'lineColor': '#fab387',
        'secondaryColor': '#181825',
        'tertiaryColor': '#1e1e2e',
        "clusterBorder": "#f2cdcd",
        'noteTextColor': '#f5e0dc',
        'noteBkgColor': '#f5c2e7',
        'notesBorderColor': '#cba6f7',
        'textColor': '#f5e0dc',
        'fontSize': '16px',
        'labelTextColor': '#f5e0dc',
        'actorBorder': '#89b4fa',
        'actorBkg': '#1e1e2e',
        'actorTextColor': '#f5e0dc',
        'actorLineColor': '#89b4fa',
        'signalColor': '#cdd6f4',
        'signalTextColor': '#f5e0dc',
        'messageTextColor': '#f5e0dc',
        'messageLine0TextColor': '#f5e0dc',
        'messageLine1TextColor': '#f5e0dc',
        'loopTextColor': '#f5e0dc',
        'activationBorderColor': '#f5c2e7',
        'activationBkgColor': '#1e1e2e',
        'sequenceNumberColor': '#1e1e2e'
      }
    }
  }%%

   graph TD
     A[Record] --> B[Feature Extractor A]
     B --> C[Extract Feature A]
     A --> D[Feature Extractor B]
     D --> E[Extract Feature B]
     A --> F[Feature Extractor C]
     F --> G[Extract Feature C]
     C --> H[Combine Features]
     E --> H
     G --> H
     H --> I[Feature Set]
  2. Caching: Caching of intermediate results for frequently accessed data.

  3. Optimized Data Structures: Optimise schemas to allow efficient data structures for feature representation and manipulation.

  4. Batched Processing: Process multiple records in batches to amortize overhead costs.

  5. Feature Selection: Select features carefully to provide the most discriminative power for downstream tasks.

Performance can be quantified using the following metrics:

  1. Extraction Throughput
  2. Feature Extraction Latency
  3. Memory Efficiency
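These metrics map naturally onto Go's benchmarking harness; the stub extractor below is an assumption standing in for a real implementation.

package extractor

import "testing"

// stubExtractor stands in for a real feature extractor.
type stubExtractor struct{}

func (stubExtractor) ExtractFeatures(r []byte) []byte { return r }

func BenchmarkExtractFeatures(b *testing.B) {
	ex := stubExtractor{}
	rec := make([]byte, 129) // record-sized input
	b.ReportAllocs()         // memory efficiency (allocs/op, B/op)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = ex.ExtractFeatures(rec) // ns/op ≈ extraction latency
	}
}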

5.1 Rule-Based Classification

Who sets the rules?

The responsibility of defining a category may be assigned to either an individual entity or a collaboration of entities such as:

  • The ASP entity
  • The protocol entity
  • The network governance entity
  • The regulatory entity
5.1.1 Overview

The classification process evaluates records against a set of rules to determine their categories.

The ASP considers that an object categorized as $C$ is therefore compliant and must have satisfied all the rules of $C$, i.e.:

  • A transaction is categorised as AML compliant because it has satisfied AML rules such as Sender is not in the OFAC list.
  • A person is categorised as KYC compliant because it has satisfied KYC rules such as Person has provided a valid ID.
  • A vote is categorised as Valid Vote because it has satisfied voting rules such as Vote is cast within the voting period.
  • A document is categorised as Approved Document because it has satisfied document rules such as Document is signed & reviewed by an auditor.

1. The Compliance Domain:

Given a compliance predicate $P$ and its propositional function: $P(x) = r_1(x) \land r_2(x) \land \dots \land r_n(x)$

  • Where $R = \{r_1, r_2, \dots, r_n\}$ is a set of atomic rules.

$P$ is considered a Compliance Domain only if the following holds true: $\forall x \in T_c : P(x)$ and $\forall x \in T_{nc} : \lnot P(x)$

Where:

  • $x$ is compliant with $P$ if and only if $x$ satisfies all the rules in $R$.
  • $x$ is not compliant with $P$ if and only if $x$ does not satisfy all the rules in $R$.
  • $T_c$ is a set of transactions that are compliant with $P$.
  • $T_{nc}$ is a set of transactions that are not compliant with $P$.
  • $T_c$ and $T_{nc}$ are subsets of the universal set $U$.

Example:

If the statements are true:

  • A set of transactions ($T_c$) exists such that each transaction ($x$) is AML ($P$) compliant.
  • A set of transactions ($T_{nc}$) exists such that each transaction ($x$) is not AML ($P$) compliant.
  • $T_c$ and $T_{nc}$ belong to a super set of transactions $U$.

Then a Compliance Domain ($P$) exists as a set of rules $R$ for which $P(x)$ holds true for all transactions in $T_c$ and false for all transactions in $T_{nc}$.
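In KaTeX notation, the example can be restated as follows (symbol names assumed for illustration):

$$
P(x) = r_1(x) \land r_2(x) \land \dots \land r_n(x), \qquad R = \{r_1, \dots, r_n\}
$$

$$
T_c = \{\, x \in U : P(x) \,\}, \qquad T_{nc} = U \setminus T_c
$$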

2. Defining Compliance Rules:

By identifying ($P$) and the sets $T_c$ & $T_{nc}$, it is then possible to derive the compliance rules $R$ which govern $P$:

  • I.e., the rules for AML compliance could be:

    • Sender & receiver are not in the OFAC list
    • Sender & receiver are not in the FATF list
    • Transaction amount is less than 1000 USD.
  • These rules are verified against a set of:

    • AML compliant transactions ($T_c$)
    • non-AML compliant transactions ($T_{nc}$).

Tip

Rules should be represented as atomic & independent, and therefore avoid any:

  • Representations indicating a hierarchical structure of rules, i.e.:

    • Transaction is not from a sanctioned entity
      • Sender is not in the OFAC list
  • Representations indicating dependencies between rules, i.e.:

    • Transaction is not from a sanctioned entity
      • Sender is not in the OFAC list
      • Sender is not in the FATF list
3. Deriving Categories:

A category is a translation of $P$ where features & thresholds represent the atomic rules of $P$:

  • AML
    • Category: AML_COMPLIANT
  • Sender & receiver is not in the OFAC list
    • Feature: OFAC_LIST_MEMBERSHIP
    • Threshold: false
    • OFAC_LIST_MEMBERSHIP is false for a record.
  • Sender & receiver is not in the FATF list
    • Feature: FATF_LIST_MEMBERSHIP
    • Threshold: false
    • FATF_LIST_MEMBERSHIP is false for a record.
  • Transaction amount is less than 1000 USD.
    • Feature: TRANSACTION_AMOUNT
    • Threshold: 1000
    • TRANSACTION_AMOUNT is less than 1000 for a record.

This translation (see 4.2 Feature Types and Formats) results in a JSON Schema document such as:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "tag:0xbow.io,2024:categories:AML_COMPLIANT",
  "title": "Record is AML Compliant",
  "type": "object",
  "properties": {
    "features": {
      "type": "object",
      "properties": {
        "OFAC_LIST_MEMBERSHIP": {
          "$id": "tag:0xbow.io,2024:categories:AML_COMPLIANT:features:OFAC_LIST_MEMBERSHIP",
          "type": "boolean",
          "default": "true"
        },
        "FATF_LIST_MEMBERSHIP": {
          "$id": "tag:0xbow.io,2024:categories:AML_COMPLIANT:features:FATF_LIST_MEMBERSHIP",
          "type": "boolean",
          "default": "true"
        },
        "TRANSACTION_AMOUNT": {
          "$id": "tag:0xbow.io,2024:categories:AML_COMPLIANT:features:TRANSACTION_AMOUNT",
          "type": "integer",
          "default": "1000"
        }
      },
      "required": [
        "OFAC_LIST_MEMBERSHIP",
        "FATF_LIST_MEMBERSHIP",
        "TRANSACTION_AMOUNT"
      ]
    }
  },
  "required": ["features"]
}

Note

Features can be considered as the properties or attributes of a record which are later evaluated against the compliance rules.

i.e. The rule Sender & receiver is not in the OFAC list evaluates the value of 'OFAC_LIST_MEMBERSHIP' which is of a boolean type.

There are no constraints on the type of features that can be used in the schema. It could be a string, integer, boolean, array, object etc.

However, a feature should not include or point/refer to another feature, nor be a function of another feature.

4. Classifying a record:

Note that default values are set to invalidate the category for the record.

The default document is a document which satisfies the Schema but where all features have their values set to defaults.

{
  "features": {
    "OFAC_LIST_MEMBERSHIP": true,
    "FATF_LIST_MEMBERSHIP": true,
    "TRANSACTION_AMOUNT": 1000
  }
}

As outlined in section 4.3, the Feature Extractor delivers the values of these features in the form of JSON patch operations.

"patch": [
  {
    "op": 2,
    "root": "0x010010",
    "$id": "tag:0xbow.io,2024:categories:AML_COMPLIANT:features:OFAC_LIST_MEMBERSHIP",
    "value": "false",
    "merkle-proof": {}
  },
  {
    "op": 2,
    "root": "0x010010",
    "$d": "tag:0xbow.io,2024:categories:AML_COMPLIANT:features:FATF_LIST_MEMBERSHIP",
    "value": "false",
    "merkle-proof": {}
  },
  {
    "op": 2,
    "root": "0x010010",
    "$d": "tag:0xbow.io,2024:categories:AML_COMPLIANT:features:TRANSACTION_AMOUNT",
    "value": "100",
    "merkle-proof": {}
  }
]

If, after applying these patches to the default document, the document still satisfies the schema and the features are consistent with the rules, then the record is classified as AML_COMPLIANT.

{
  "features": {
    "OFAC_LIST_MEMBERSHIP": false,
    "FATF_LIST_MEMBERSHIP": false,
    "TRANSACTION_AMOUNT": 100
  }
}
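For illustration, the final rule check could look like the Go sketch below; the struct and function names are assumptions, with field semantics taken from the schema above.

package classify

// AMLFeatures mirrors the patched feature document above.
type AMLFeatures struct {
	OFACListMembership bool
	FATFListMembership bool
	TransactionAmount  int
}

// IsAMLCompliant evaluates the three atomic rules; all must hold
// for the record to be classified as AML_COMPLIANT.
func IsAMLCompliant(f AMLFeatures) bool {
	return !f.OFACListMembership && // sender & receiver not on the OFAC list
		!f.FATFListMembership && // sender & receiver not on the FATF list
		f.TransactionAmount < 1000 // amount under the 1000 USD threshold
}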

5.2 Category Bitmap Specification

5.2.1 Bitmap Vector

The bitmap is defined as: $B = (b_0, b_1, \dots, b_{255}),\ b_i \in \{0, 1\}$

Categories are mapped to a 256-bit vector where each bit represents a specific category based on a predefined schema. The bitmap can be efficiently stored as a 256-bit integer type or a byte array of size 32 (32 bytes).

Multiple bits indicate multiple categories, i.e.:

  • Bits 64-127: Protocol specific Categories
  • Bits 128-191: Network Specific Categories
  • Bits 192-255: Cross-chain Categories

This compact representation allows for efficient storage and querying of categorized records.
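A minimal Go sketch of bitmap manipulation over a 32-byte array follows; the bit-order convention (byte 0 holds bits 0-7) is an assumption.

package bitmap

// Category is a bit index in the 256-bit bitmap (0..255).
type Category uint8

// Set marks the record as belonging to category c.
func Set(bm *[32]byte, c Category) {
	bm[c/8] |= 1 << (c % 8)
}

// Has reports whether category bit c is set.
func Has(bm [32]byte, c Category) bool {
	return bm[c/8]&(1<<(c%8)) != 0
}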

5.2.2 Pointers:

Pointers facilitate bitmap-to-bitmap referencing and enable the creation of complex category structures. Bitspaces can be reserved for pointers to other bitmaps as long as there is a clear schema for the pointer and a mapping function between the pointer and the referenced bitmap.

5.2.3 Partitions & Reservations:

A partition is a logical grouping of categories within a specific range of bits. Partitions are used to group categories based on their domain or ownership.

i.e. Bitspace 0-63 can be reserved for AML categories, while bitspace 64-127 can be reserved for KYC categories.

During the integration phase between an entity and an ASP, the entity (i.e. a protocol) can reserve a specific range of bits for its categories, i.e. bitspace 64-127 for Protocol X categories.

5.3 Updating Classification Rules

The relevant entities must update categories to ensure the system remains effective in supporting compliance. The following steps outline some simple processes for updating categories:

  1. Rule Definition: Clearly define new categories or modifications to existing categories.
  2. Rule Validation: Fuzz-test the features of categories to ensure accuracy & edge-case handling (see the sketch below).
  3. Version Control: Maintain versioning for categories, i.e. a Git repository containing category schemas.
  4. CI/CD: Implement a Continuous Integration/Continuous Deployment (CI/CD) pipeline for category updates.
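Step 2 can lean on Go's native fuzzing; the sketch below reuses the illustrative IsAMLCompliant rule check from section 5.1 (an assumption, not normative).

package classify

import "testing"

// FuzzAMLRules cross-checks the rule evaluator against a direct
// restatement of the atomic rules (run with: go test -fuzz=AML).
func FuzzAMLRules(f *testing.F) {
	f.Add(false, false, 100) // seed: a compliant example
	f.Fuzz(func(t *testing.T, ofac, fatf bool, amount int) {
		got := IsAMLCompliant(AMLFeatures{ofac, fatf, amount})
		want := !ofac && !fatf && amount < 1000
		if got != want {
			t.Fatalf("rule evaluation mismatch: got %v, want %v", got, want)
		}
	})
}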

5.4 Categorization Best Practices

  1. Simplify Categories: Exercise a reductionist approach to simplify categories to their most basic form.

  2. Multi-Label Classification: Allow records to belong to multiple categories simultaneously.

  3. Threshold Scores: Include threshold scores for each category assignment.

  4. Interpretability: Define categories & features well to ensure interpretability of the classification results.

  5. Cross-Protocol Consistency: Ensure consistent categorization across different protocols for similar state transitions.

  6. Version Control: Maintain strict version control.

  7. Auditability: Ensure that the categorization process is auditable.

  8. Privacy-Preserving Classification: Consider using homomorphic encryption or secure multi-party computation for privacy-sensitive features.

  9. Efficient Querying: Optimise category to record mapping for efficient querying of records based on category criteria.

5.5 Verifiable Classification

To ensure trustless operation of the ASP system and mitigate the risk of a malicious ASP, we introduce the concept of Verifiable Classification.

Key components of Verifiable Classification:

Warning

The specific implementation of the ZKP system is WIP

1. Zero-Knowledge Proof Generation:

The classification process is to be implemented with a zero-knowledge DSL (i.e. Circom). This allows the ASP to generate a computation proof which verifies that the classification was done correctly, without revealing the actual features or the feature extractor code.

Current thoughts on the approach:

  • Auto-Generate Circom Templates based on the category-feature schema.
    • At least 1 circuit per category.
  • Utilise folding schemes (i.e. NOVA) with libraries such as Sonobe
    • The bitmap is the shared state between all steps
    • Each step is 1 classification that sets the appropriate bit in the bitmap
    • 1 public output at the final step which is the final bitmap.
  2. On-chain Attestations:

Other ASPs or external parties can attest to the validity or correctness of an ASP record categorization on-chain through attestation channels such as EAS.

  3. Proof Verification:

External parties can verify the ZKP without accessing private features or extractor code.

5.6 Classification Dispute Resolution

Dispute Resolution Process

A dispute resolution process is necessary to handle conflicting categories:

---
title: "Figure 5.2: Dispute Resolution Workflow
"
---

%%{
  init: {
    'theme': 'base',
    'themeVariables': {
      'primaryColor': '#1e1e2e',
      'primaryTextColor': '#cdd6f4',
      'primaryBorderColor': '#89b4fa',
      'lineColor': '#fab387',
      'secondaryColor': '#181825',
      'tertiaryColor': '#1e1e2e',
      "clusterBorder": "#f2cdcd",
      'noteTextColor': '#f5e0dc',
      'noteBkgColor': '#f5c2e7',
      'notesBorderColor': '#cba6f7',
      'textColor': '#f5e0dc',
      'fontSize': '16px',
      'labelTextColor': '#f5e0dc',
      'actorBorder': '#89b4fa',
      'actorBkg': '#1e1e2e',
      'actorTextColor': '#f5e0dc',
      'actorLineColor': '#89b4fa',
      'signalColor': '#cdd6f4',
      'signalTextColor': '#f5e0dc',
      'messageTextColor': '#f5e0dc',
      'messageLine0TextColor': '#f5e0dc',
      'messageLine1TextColor': '#f5e0dc',
      'loopTextColor': '#f5e0dc',
      'activationBorderColor': '#f5c2e7',
      'activationBkgColor': '#1e1e2e',
      'sequenceNumberColor': '#1e1e2e'
    }
  }
}%%

graph TD
    A[Conflicting Classification Detected] --> B[Escalate to Committee]
    B --> C[Analyze Record and Proofs]
    C --> D[Committee Voting]
    D --> E{Majority Decision?}
    E -->|Yes| F[Update Categories]
    E -->|No| G[Extended Deliberation]
    G --> D
    F --> H[Refine Classification Process]
  1. Detection: Automated systems identify conflicting classifications for the same record.

  2. Escalation: Disputes are escalated to a resolution committee.

  3. Analysis: The committee examines:

    • Raw record data
    • Extracted features from each extractor
    • Applied classification rules
    • Verifiable Classification proofs
  4. Voting: Committee members vote on the correct classification.

  5. Resolution: The majority decision is applied, and the record’s classification is updated.

  6. Rule Refinement: If necessary, classification rules are updated to prevent similar future disputes.

Best Practices for Dispute Resolution

  1. Diverse Committee: Ensure the resolution committee includes members with varied expertise (e.g., protocol developers, cryptographers, domain experts).

  2. Transparent Process: Document and make public the dispute resolution process and outcomes.

  3. Timeboxed Resolution: Set strict timeframes for each stage of the dispute resolution process.

  4. Weighted Voting: Consider implementing a weighted voting system based on member expertise or stake.

  5. Appeal Mechanism: Allow for appeals of dispute resolutions under specific circumstances.

  6. Incentive Alignment: Implement a reward/penalty system for committee members based on the accuracy of their votes.

6.1 Smart Contract Specification

Info

The Public Registry is a collection of smart contracts which serves as the on-chain storage solution for the ASP. It provides the necessary interfaces for onchain protocols to integrate with the ASP.

The current [registry](https://github.com/0xbow-io/asp-contracts-V1.0) is composed of 2 core contracts:
---
title: "Figure 6.1: ASP Smart Contract Class Diagram"
---

%%{
  init: {
    'theme': 'base',
    'themeVariables': {
      'primaryColor': '#1e1e2e',
      'primaryTextColor': '#cdd6f4',
      'primaryBorderColor': '#89b4fa',
      'lineColor': '#fab387',
      'secondaryColor': '#181825',
      'tertiaryColor': '#1e1e2e',
      "clusterBorder": "#f2cdcd",
      'noteTextColor': '#f5e0dc',
      'noteBkgColor': '#f5c2e7',
      'notesBorderColor': '#cba6f7',
      'textColor': '#f5e0dc',
      'fontSize': '16px',
      'labelTextColor': '#f5e0dc',
      'actorBorder': '#89b4fa',
      'actorBkg': '#1e1e2e',
      'actorTextColor': '#f5e0dc',
      'actorLineColor': '#89b4fa',
      'signalColor': '#cdd6f4',
      'signalTextColor': '#f5e0dc',
      'messageTextColor': '#f5e0dc',
      'messageLine0TextColor': '#f5e0dc',
      'messageLine1TextColor': '#f5e0dc',
      'loopTextColor': '#f5e0dc',
      'activationBorderColor': '#f5c2e7',
      'activationBkgColor': '#1e1e2e',
      'sequenceNumberColor': '#1e1e2e'
    }
  }
}%%


classDiagram

  class AccessControl{
    ~Map~bytes32|Set~address~~ _roleMembers
    +grant(account)
    +revoke(role,account)
    +has(role,account)
  }


  class Registry {
    +bytes32 REGISTRY_ADMIN_ROLE
    -Map~uint256|MerkleTree~bytes32~~ _scope_record_trees
    -Map~uint256|Map~bytes32|bytes32~~ _scope_record_categories

    +setRecordCategory(scope,r,c)
    +getCategoryForRecord(scope,r)
    +getRecordAndCategoryAt(scope,index)
    +getRecordsAndCategories(scope, from, to)
    +getLatestForScope(scope)

  }

  class ASP{
    +applyPredicate(scope, records, categoryMask, predicateType)
    -_applyPredicate(predicateType, categoryMask, categoryBitmap)
  }

  class PredicateType{
      <<enumeration>>
      Intersection
      Union
      Complement
  }


  AccessControl --|> Registry : Inheritance
  Registry --|> ASP : Inheritance

  ASP "1" -- "*" PredicateType: contains

6.2 Data Structure and Storage

The Public Registry uses two primary data structures for efficient storage and querying:

  1. Scope & Record Hash based Categories Storage:

    • Implemented as:

      mapping(uint256 scope => EnumerableMap.Bytes32ToBytes32Map RecordToCategory) scopeRecordCategories

      • enables quick lookup of category bitmaps for a given record hash within a specific scope
      • uint256 scope is the scope identifier
      • EnumerableMap.Bytes32ToBytes32Map RecordToCategory is a mapping of record hashes to category bitmaps
      • EnumerableMap.Bytes32ToBytes32Map is imported from the EnumerableMap openzeppelin library
      • EnumerableMap.Bytes32ToBytes32Map is utilised for easy iteration over the set of record hashes.
  2. Scope based Record Merkle Trees:

    • Implemented as

      mapping(uint256 scope => LeanIMTData recordTree) scopeRecordTrees

      • Supports inclusion proof verification for a given record hash within a specific scope
      • uint256 scope is the scope identifier
      • LeanIMTData recordTree is the merkle-tree representation of the set of record hashes within the scope
      • LeanIMTData is imported from the InternalLeanIMT zk-kit library
      • InternalLeanIMT, LeanIMTData provides a gas-optimized merkle-tree implementation.

6.3 Query Interface

The Record Category Registry provides some public functions for querying the registry:


    /*//////////////////////////////////////////////////////////////////////////
    | PUBLIC FUNCTIONS
    //////////////////////////////////////////////////////////////////////////*/

    /**
     * @notice Returns the category bitmap for a record hash for a specific protocol scope
     * @param scope The protocol scope identifier
     * @param recordHash The hash of the record event
     * @return categoryBitmap The category bitmap for the record hash
     */
    function getCategoryBitmap(
        uint256 scope,
        bytes32 recordHash
    ) public view returns (bytes32 categoryBitmap) {
        (bool exists, bytes32 bitmap) = scopeRecordCategories[scope].tryGet(
            recordHash
        );
        if (!exists) {
            revert RecordNotFound(scope, recordHash);
        }
        return bitmap;
    }

    /**
     * @notice Returns the category bitmap for a record hash at a given index
     *          for a specific protocol scope
     * @param scope The protocol scope identifier
     * @param index The index of the record hash
     * @return recordHash recordHash at the given index
     * @return categoryBitmap The category bitmap for the record hash
     */
    function getRecordHashAndCategoryAt(
        uint256 scope,
        uint256 index
    ) public view returns (bytes32 recordHash, bytes32 categoryBitmap) {
        return scopeRecordCategories[scope].at(index);
    }

    /**
     * @notice Return the category bitmap for a record hash
     *          for a specific protocol scope
     * @dev does not revert if the record hash does not exist
     * @param scope The protocol scope identifier
     * @param recordHash The hash of the record event
     * @return exists A boolean indicating if the record hash exists
     * @return categoryBitmap The category bitmap for the record hash
     */
    function tryGetCategoryBitmap(
        uint256 scope,
        bytes32 recordHash
    ) public view returns (bool exists, bytes32 categoryBitmap) {
        return scopeRecordCategories[scope].tryGet(recordHash);
    }

    /**
     * @notice Returns the record hashes and their categories for a specific protocol scope between
     *         a given range
     * @param scope The protocol scope identifier
     * @param from The start index of the range
     * @param to The end index of the range
     * @return recordHashes The record hashes for the given range
     * @return categoryBitmaps The category bitmaps for the given range
     */
    function getRecordHashesAndCategories(
        uint256 scope,
        uint256 from,
        uint256 to
    )
        public
        view
        returns (
            bytes32[] memory recordHashes,
            bytes32[] memory categoryBitmaps
        )
    {
        require(
            from < to && to <= scopeRecordCategories[scope].length(),
            "Invalid range"
        );
        uint256 length = to - from;
        recordHashes = new bytes32[](length);
        categoryBitmaps = new bytes32[](length);

        for (uint256 i = 0; i < length; i++) {
            (recordHashes[i], categoryBitmaps[i]) = scopeRecordCategories[scope]
                .at(from + i);
        }
    }

    /**
     * @notice Returns the last record hash & its category and the merkle root
     *         for a specific protocol scope
     * @param scope The protocol scope identifier
     * @return root The merkle root for the protocol scope
     * @return recordHash The hash of the last known record event
     * @return categoryBitmap The category bitmap for the last known record event
     * @return index The index of the last known record event
     */
    function getLatestForScope(
        uint256 scope
    )
        public
        view
        returns (
            uint256 root,
            bytes32 recordHash,
            bytes32 categoryBitmap,
            uint256 index
        )
    {
        root = scopeRecordTrees[scope]._root();
        index = scopeRecordCategories[scope].length();
        if (index > 0) {
            (recordHash, categoryBitmap) = scopeRecordCategories[scope].at(
                index - 1
            );
        }
    }
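
For illustration, a consumer contract could gate its logic on a record's categories via tryGetCategoryBitmap. The IRecordCategoryRegistry interface and RegistryConsumer contract below are hypothetical sketches, not names defined by this specification:

pragma solidity ^0.8.0;

// Hypothetical interface over the query functions above.
interface IRecordCategoryRegistry {
    function tryGetCategoryBitmap(
        uint256 scope,
        bytes32 recordHash
    ) external view returns (bool exists, bytes32 categoryBitmap);
}

contract RegistryConsumer {
    IRecordCategoryRegistry public immutable registry;

    constructor(IRecordCategoryRegistry _registry) {
        registry = _registry;
    }

    // True iff the record exists in `scope` and every bit of `mask` is set
    // in its category bitmap (the Intersection predicate for a single record).
    function isCategorized(
        uint256 scope,
        bytes32 recordHash,
        bytes32 mask
    ) external view returns (bool) {
        (bool exists, bytes32 bitmap) = registry.tryGetCategoryBitmap(scope, recordHash);
        return exists && (bitmap & mask) == mask;
    }
}

Beyond such point queries, the ASP contract exposes set-predicate evaluation over category bitmaps: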


function _applyPredicate(
    PredicateType predicateType,
    bytes32 characteristicFunction,
    bytes32 elementProperties
) internal pure returns (bool satisfiesPredicate) {
    if (predicateType == PredicateType.Intersection) {
        satisfiesPredicate =
            (elementProperties & characteristicFunction) ==
            characteristicFunction;
    } else if (predicateType == PredicateType.Union) {
        satisfiesPredicate =
            (elementProperties & characteristicFunction) != 0;
    } else if (predicateType == PredicateType.Complement) {
        satisfiesPredicate =
            (elementProperties & characteristicFunction) == 0;
    }
}

function applyPredicate(
    uint256 domain,
    bytes32[] calldata subset,
    bytes32 characteristicFunction,
    PredicateType predicateType
) public view returns (bytes32[] memory elements, uint256 setCardinality) {
    bytes32[] memory satisfyingElements = new bytes32[](subset.length);
    for (uint256 i = 0; i < subset.length; i++) {
        bytes32 element = subset[i];
        (bool isMember, bytes32 elementProperties) = tryGetCategoryBitmap(
            domain,
            element
        );
        if (!isMember) {
            continue;
        }
        if (
            _applyPredicate(
                predicateType,
                characteristicFunction,
                elementProperties
            )
        ) {
            satisfyingElements[setCardinality] = element;
            setCardinality++;
        }
    }
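    // Shrink the over-allocated result array in place: overwrite its length
    // slot in memory with the number of satisfying elements found.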
    assembly {
        mstore(satisfyingElements, setCardinality)
    }
    return (satisfyingElements, setCardinality);
}

Predicate types:

  • 0: Intersection (all bits in categoryMask must be set)
  • 1: Union (at least one bit in categoryMask must be set)
  • 2: Complement (no bits in categoryMask should be set)
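
As a worked example (values chosen purely for illustration), take a category bitmap with bits 0, 1 and 3 set and a mask with bits 0 and 1 set:

bytes32 bitmap = bytes32(uint256(0x0b)); // bits 0, 1 and 3 set
bytes32 mask   = bytes32(uint256(0x03)); // bits 0 and 1 set

// Intersection: (0x0b & 0x03) == 0x03 -> true  (every mask bit is set)
// Union:        (0x0b & 0x03) != 0    -> true  (at least one mask bit is set)
// Complement:   (0x0b & 0x03) == 0    -> false (a mask bit is set)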

6.4 Gas Optimization Strategies

7.1 ZKP System Overview

7.2 Proof Generation

7.3 Proof Verification

7.4 Security Considerations

8.1 End-User Requirements

8.2 Interacting with Protocols

8.3 Generating Compliance Proofs

8.4 Privacy Considerations

9.1 Policy Definition Language

9.2 Implementing Compliance Checks

9.3 Cross-Protocol Policies

9.4 Policy Update Mechanisms

10.1 Sharding Strategies

10.2 Layer 2 Solutions

10.3 Optimizing Off-Chain Components

10.4 Benchmarking and Monitoring

11.1 Threat Model

11.2 Encryption and Data Protection

11.3 Audit Trail Implementation

11.4 Incident Response

12.1 Governance Model

12.2 Upgrade Mechanisms

12.3 Backward Compatibility

12.4 Community Participation

13.1 Glossary

13.2 References

13.3 Data Schemas

13.4 API Spec

0xBow ASP V1.0

Feature Extractors

Compliance Predicate

ASP Contracts

Work-in-Progress

0xBow ASP Contracts are still in development. To check the current progress of the contracts, head to the asp-contracts-v1.0 repository.

## Deployed Contracts:
| chain | contract | address |
| ----- | -------- | ------- |
| Sepolia | AssociationSetProvider.sol | 0xaaf9..d286 |
| Gnosis | AssociationSetProvider.sol | 0x9a64..f53E |
| Mainnet | | ⚠️ Pending Deployment |
| Base | | ⚠️ Pending Deployment |

REST / gRPC API

Work-in-Progress

0xBow ASP API v1.0 is still in development and not production-ready. Here is the list of features to be available at launch:

  • REST API
    • Endpoints for querying records
    • Endpoints for generating zk-proofs
    • Endpoints for service status & health checks.
  • gRPC API
    • Support for synchronous & asynchronous queries
    • Data streaming
    • Support for private channels via Waku
  • Webhooks
    • Support for Event push-notifications & streams

⚠️ REST API Endpoints built for older revisions of Privacy Pool will be deprecated soon. ⚠️

---

Overview:


0xBow ASP REST API v1.0:

Base URL: https://api.0xbow.io/api/{version}

0xBow ASP REST API v1.0 provides a set of API endpoints for querying records, generating association-sets, computing proofs and querying service status.

⚠️ These endpoints are provided for convenience and are not privacy-preserving ⚠️

🟢 POST /api/v1/{set}

Deprecated

Description

Context:

Prior revisions of the ASP used binary classification to categorize records.

The initial version maintained two large sets of records to reflect this classification:

  • Inclusion Set: Record Hashes of records that passed compliance checks.
  • Exclusion Set: Record Hashes of records that failed compliance checks.

Each set was represented as a merkle-tree.

Any insertion or removal of record hashes resulted in the onchain emission of a new merkle root.

Later versions were optimised for onchain storage of sets to support onchain queries:

  • Rather than one large set, the records were split into smaller subsets.
  • Each subset is associated with a unique identifier, mtID, which is a hash of the tuple (chainID, contract address, set type).
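
A sketch of how such an identifier could be derived; the specification does not pin down the hash function or encoding here, so keccak256 over abi.encodePacked is assumed purely for illustration:

// Illustrative only: the hash function and encoding are assumptions.
function mtID(
    uint256 chainID,
    address poolContract,
    string memory setType
) pure returns (bytes32) {
    return keccak256(abi.encodePacked(chainID, poolContract, setType));
}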

This API endpoint generates a new association-set based on the provided hashSet and hashFilter. It was tailored for Privacy Pool to support its proof-of-innocence mechanism.


Path Parameters

| name | type | description |
| ---- | ---- | ----------- |
| set | required | the target set to query against |

Possible values for {set}:

  1. inclusion: query the inclusion set
  2. exclusion: query the exclusion set

Query Parameters

| name | data type | example | description |
| ---- | --------- | ------- | ----------- |
| chain | string | "sepolia" | name of the chain where the contract is deployed |
| contract | string | "0x8e3E…" | privacy pool contract address |
| mt_id | string | "0x1e1294…" | unique identifier of a set |
| hash_only | boolean | false | only return the set of record hashes |
| size_limit | integer | 20 | limits the size of the returned set to size_limit |
| pin_to_ipfs | boolean | false | flag for pinning the association set to IPFS |
| random | boolean | true | flag for randomising the record selection |
| needSort | boolean | true | flag for sorting the set by record index |

Body Parameters

| name | data type | description |
| ---- | --------- | ----------- |
| hashSet | string array | set of record hashes |
| hashFilter | string | type of filtering |

Example for hashSet:

{
  "hashSet": [
    "113143e9dae0aa58d13b26dec085606d28fafe70582ec52fd5bbc08ae8d5b5c9",
    "1aa21d201f72b61e0e59bdd7a0ef62dced57e4e80fa180ff113a58dc3aeb8ea9",
    "18ba306635d7838c1378a9243c22487f906ec929a5a8d5c30f172a9bc5824d64",
    "2dca7e37ec7e31d0e56b456e6ed435ced4c506b6dada186f6a14907ecc50a37e"
  ]
}

Possible values for hashFilter:

  1. EXCEPT: Exclude the records in hashSet from the response.
{
  "hashFilter": ["EXCEPT"]
}
  2. INTERSECT:

Return only the records that are members of both hashSet and {set}.

{
  "hashFilter": ["INTERSECT"]
}
  3. UNION:

Return the union of hashSet and the response set, restricted to records that are members of {set}.

{
  "hashFilter": ["UNION"]
}

Example cURL


API="api.0xbow.io"
ENDPOINT="/api/v1/inclusion"
CHAIN="sepolia"
CONTRACT="0x8e3E4702B4ec7400ef15fba30B3e4bfdc72aBC3B"
HASH_ONLY="false"
SIZE_LIMIT="20"
PIN_TO_IPFS="false"

URI="${API}${ENDPOINT}?"
URI+="chain=${CHAIN}&"
URI+="contract=${CONTRACT}&"
URI+="hash_only=${HASH_ONLY}&"
URI+="size_limit=${SIZE_LIMIT}&"
URI+="pin_to_ipfs=${PIN_TO_IPFS}"

curl --location --request POST $URI \
--header "Content-Type: application/json" \
--data "{
    \"hashSet\": [],
    \"hashFilter\": \"\"
}"

Responses

| http code | content-type | response |
| --------- | ------------ | -------- |
| 200 | application/json; charset=utf-8 | JSON Object |

Example Response

{
    "uuid": "",
    "mtID": "1e1294aedb5c4bc78479c7cd09c163808d894bb37e61eadd73cdc8cedc85bf9f",
    "zero": "2fe54c60d3acabf3343a35b6eba15db4821b340f76e741e2249685ed4899af6c",
    "merkleRoot": "002915b4928a5b34454158b06c50777f555f307b7fcace62f666e1586ee899b1",
    "hashSet": [
        "113143e9dae0aa58d13b26dec085606d28fafe70582ec52fd5bbc08ae8d5b5c9",
        "1aa21d201f72b61e0e59bdd7a0ef62dced57e4e80fa180ff113a58dc3aeb8ea9",
        "18ba306635d7838c1378a9243c22487f906ec929a5a8d5c30f172a9bc5824d64",
        "2dca7e37ec7e31d0e56b456e6ed435ced4c506b6dada186f6a14907ecc50a37e",
        "10d6373c1464696f856fbfee98132e28166f0227a6e40ab501d5468ae73f1c22",
        "1658ef12bff2c2a6cd37f09e6f0686fba9514b8e17594752f898009f83cd6cfb",
        "2b06d56c6d1812babd87d3cd0127a8f4d92a56130bd57f11aacd51d8a4e634c3",
        "18e44125cbb1fe0d81d0c1694bde77ba35a2cb04dc1ee4d993809d919080da22",
        "03512b924c8c0d98a9ad40a1b9b934f83139adfc281fa120b755578a73457b63"
    ],
    "proofs": [
        {
            "record_hash": "113143e9dae0aa58d13b26dec085606d28fafe70582ec52fd5bbc08ae8d5b5c9",
            "record_data": {
                "txHash": "0xaa2243999994946b104ecdcc41e8b392043d9478347fa11782ed6ae411021ae5",
                "outputCommitment1": "0x00b4a16ff4129dcdcd100bc1cad317980302f243d6ca184480455876d50eff5a",
                "outputCommitment2": "0x294f8fbc010ab687a719c5849420a49cec93bc831122684491f2527cd2011eeb",
                "nullifierHash1": "0x070cf43476880e27f1728a1f2446a57317a6892ef9af99a0bfa93f8a4792e341",
                "nullifierHash2": "0x0c3bbbce67df72abb37f6a0f603182c8984392f27bdda38a30452c985015562d",
                "recordIndex": 14,
                "publicAmount": "0000000000000000000000000000000000000000000000000de0b6b3a7640000"
            },
            "merkle_proof": {
                "merkle_tree_max_depth": 4,
                "leaf": "0x113143e9dae0aa58d13b26dec085606d28fafe70582ec52fd5bbc08ae8d5b5c9",
                "leaf_index": 0,
                "path_root": "0x002915b4928a5b34454158b06c50777f555f307b7fcace62f666e1586ee899b1",
                "path_indices": [
                    0,
                    0,
                    0,
                    0
                ],
                "path_positions": [
                    1,
                    1,
                    1,
                    1
                ],
                "path_elements": [
                    "0x1aa21d201f72b61e0e59bdd7a0ef62dced57e4e80fa180ff113a58dc3aeb8ea9",
                    "0x2f7b5ca0810afc0422b315ebae2df141e67ed8487e864cb903e4590a7bd34403",
                    "0x12a24534a43a7a6f51a9beaa33b3676766f83ab6db907b590932a86f91ea0307",
                    "0x073f04e5838a95e2a635da7cbbf87b60cd974b5cad98b5638fd96e71cc5eb130"
                ]
            }
        },
        {
            "record_hash": "1aa21d201f72b61e0e59bdd7a0ef62dced57e4e80fa180ff113a58dc3aeb8ea9",
            "record_data": {
                "txHash": "0x9ac2b822f4147e4d915a846268cef946f988e67dd5da964049c30cf5bccb055c",
                "outputCommitment1": "0x0d45924f17aa19a6d5de9bf8c3ffcec906ae89f15989b065f93a44dc42fb7897",
                "outputCommitment2": "0x1e786708230d87cd775b2efd4f0822543a18b6c12d59eb2ab50bc8bd3b4d88aa",
                "nullifierHash1": "0x1e0e798d049cc18291c4212a90a0be8e795d1ae323c94f37613371eb1b3526e9",
                "nullifierHash2": "0x0b7b28d575321eef0b1b559ecc8826c5e0ccfa9b838b672784ca2371edfbc61c",
                "recordIndex": 22,
                "publicAmount": "00000000000000000000000000000000000000000000000000038d7ea4c68000"
            },
            "merkle_proof": {
                "merkle_tree_max_depth": 4,
                "leaf": "0x1aa21d201f72b61e0e59bdd7a0ef62dced57e4e80fa180ff113a58dc3aeb8ea9",
                "leaf_index": 1,
                "path_root": "0x002915b4928a5b34454158b06c50777f555f307b7fcace62f666e1586ee899b1",
                "path_indices": [
                    1,
                    0,
                    0,
                    0
                ],
                "path_positions": [
                    0,
                    1,
                    1,
                    1
                ],
                "path_elements": [
                    "0x113143e9dae0aa58d13b26dec085606d28fafe70582ec52fd5bbc08ae8d5b5c9",
                    "0x2f7b5ca0810afc0422b315ebae2df141e67ed8487e864cb903e4590a7bd34403",
                    "0x12a24534a43a7a6f51a9beaa33b3676766f83ab6db907b590932a86f91ea0307",
                    "0x073f04e5838a95e2a635da7cbbf87b60cd974b5cad98b5638fd96e71cc5eb130"
                ]
            }
        },
        {
            "record_hash": "18ba306635d7838c1378a9243c22487f906ec929a5a8d5c30f172a9bc5824d64",
            "record_data": {
                "txHash": "0x8646ba48685dda1fd4b771448276f6b6812131baeb4ff8413999f00a59fc60e9",
                "outputCommitment1": "0x0a6fd9b9f65f4173feb5fb6745a2321700aac5c9039b39f8f98588e108756664",
                "outputCommitment2": "0x0453b7ff55700c143c99d44d6a2262fd89281c5eb2e0c7d057c5bdd4d8c8b00d",
                "nullifierHash1": "0x1adbd6e04911395701ed60358784b01a1188ccacf8e93e31bc15412893218ffa",
                "nullifierHash2": "0x2e7af471ba95a488bd6e8ad35939c6d292f8cee54ec349db14019c604c378cde",
                "recordIndex": 34,
                "publicAmount": "0000000000000000000000000000000000000000000000000de0b6b3a7640000"
            },
            "merkle_proof": {
                "merkle_tree_max_depth": 4,
                "leaf": "0x18ba306635d7838c1378a9243c22487f906ec929a5a8d5c30f172a9bc5824d64",
                "leaf_index": 2,
                "path_root": "0x002915b4928a5b34454158b06c50777f555f307b7fcace62f666e1586ee899b1",
                "path_indices": [
                    0,
                    1,
                    0,
                    0
                ],
                "path_positions": [
                    3,
                    0,
                    1,
                    1
                ],
                "path_elements": [
                    "0x2dca7e37ec7e31d0e56b456e6ed435ced4c506b6dada186f6a14907ecc50a37e",
                    "0x0be6c215dddf4f423e478127405b6d33412378e10b191a6f093183dd45d7680b",
                    "0x12a24534a43a7a6f51a9beaa33b3676766f83ab6db907b590932a86f91ea0307",
                    "0x073f04e5838a95e2a635da7cbbf87b60cd974b5cad98b5638fd96e71cc5eb130"
                ]
            }
        },
        {
            "record_hash": "2dca7e37ec7e31d0e56b456e6ed435ced4c506b6dada186f6a14907ecc50a37e",
            "record_data": {
                "txHash": "0x8ac9b6bfc96bf159dca7c46328bd8121a7f70692c49cbe9c13df2292f1427c98",
                "outputCommitment1": "0x2d4e39c4f62f1c029f22e94a0b54a57a27e5d2919392f707bf31572b2df9576d",
                "outputCommitment2": "0x0b3833c986b86e6d4a16eae2bf14f5b9e34bc0911bd769f81e8604568eb939ee",
                "nullifierHash1": "0x178189eb340940570b5f3a74459262b1ff898057391fdd2ad97d012087baa14a",
                "nullifierHash2": "0x17fccb01a5ad7cfb5e4a2e7295dbf149de863b4874346974d1de15eff8f3dbde",
                "recordIndex": 36,
                "publicAmount": "0000000000000000000000000000000000000000000000000de0b6b3a7640000"
            },
            "merkle_proof": {
                "merkle_tree_max_depth": 4,
                "leaf": "0x2dca7e37ec7e31d0e56b456e6ed435ced4c506b6dada186f6a14907ecc50a37e",
                "leaf_index": 3,
                "path_root": "0x002915b4928a5b34454158b06c50777f555f307b7fcace62f666e1586ee899b1",
                "path_indices": [
                    1,
                    1,
                    0,
                    0
                ],
                "path_positions": [
                    2,
                    0,
                    1,
                    1
                ],
                "path_elements": [
                    "0x18ba306635d7838c1378a9243c22487f906ec929a5a8d5c30f172a9bc5824d64",
                    "0x0be6c215dddf4f423e478127405b6d33412378e10b191a6f093183dd45d7680b",
                    "0x12a24534a43a7a6f51a9beaa33b3676766f83ab6db907b590932a86f91ea0307",
                    "0x073f04e5838a95e2a635da7cbbf87b60cd974b5cad98b5638fd96e71cc5eb130"
                ]
            }
        },
        {
            "record_hash": "10d6373c1464696f856fbfee98132e28166f0227a6e40ab501d5468ae73f1c22",
            "record_data": {
                "txHash": "0xb6d72c74eb00a15bee2af1814cc1084b861d4538f8a4ef5650c248f3e418ca44",
                "outputCommitment1": "0x276c2fecf765137a15625f8696b25d1b39e402b8ed43950893962f72ca22c0fb",
                "outputCommitment2": "0x210d1818699a883053297c4ac920ab883c30472b821679255f2a6032e2a26316",
                "nullifierHash1": "0x015fa37c5f43504ba940b60361437b3be94830e6f4bee5a359381d2bf8e1e2bd",
                "nullifierHash2": "0x1f369fab9b7db862645106e6a3a07c7df3a5b2ce1b256aad5ee599410c14dff3",
                "recordIndex": 42,
                "publicAmount": "000000000000000000000000000000000000000000000000016345785d8a0000"
            },
            "merkle_proof": {
                "merkle_tree_max_depth": 4,
                "leaf": "0x10d6373c1464696f856fbfee98132e28166f0227a6e40ab501d5468ae73f1c22",
                "leaf_index": 4,
                "path_root": "0x002915b4928a5b34454158b06c50777f555f307b7fcace62f666e1586ee899b1",
                "path_indices": [
                    0,
                    0,
                    1,
                    0
                ],
                "path_positions": [
                    5,
                    3,
                    0,
                    1
                ],
                "path_elements": [
                    "0x1658ef12bff2c2a6cd37f09e6f0686fba9514b8e17594752f898009f83cd6cfb",
                    "0x2bb5136f5053629d470a7df2ea75ea49885714c98802c4b16fd42fd4359a2166",
                    "0x2357f678f06b3729cd17d0232af0cb5597aeb9695690b93f4ed613772712bb72",
                    "0x073f04e5838a95e2a635da7cbbf87b60cd974b5cad98b5638fd96e71cc5eb130"
                ]
            }
        },
        {
            "record_hash": "1658ef12bff2c2a6cd37f09e6f0686fba9514b8e17594752f898009f83cd6cfb",
            "record_data": {
                "txHash": "0xfff8cddf0a21328713a3d81c6b8c6b33bc80a45e21ee79a40720434bd25bf164",
                "outputCommitment1": "0x13137f57d077844c7f951d78120ba1f7925dfd30b5f1c8a20d34a2bb76ef18ce",
                "outputCommitment2": "0x1a3281e4d22ef165d86f67d956a922394ee5f587fb7be397d6a030e1d4f44c5b",
                "nullifierHash1": "0x20d7d4d025426a2f9d42fb626caf3a46217bb83abcb9b2278dc492e3a9badb0d",
                "nullifierHash2": "0x12841a879639231880052240d425e06092c948f9b7991ae6a501077908624404",
                "recordIndex": 44,
                "publicAmount": "00000000000000000000000000000000000000000000000000038d7ea4c68000"
            },
            "merkle_proof": {
                "merkle_tree_max_depth": 4,
                "leaf": "0x1658ef12bff2c2a6cd37f09e6f0686fba9514b8e17594752f898009f83cd6cfb",
                "leaf_index": 5,
                "path_root": "0x002915b4928a5b34454158b06c50777f555f307b7fcace62f666e1586ee899b1",
                "path_indices": [
                    1,
                    0,
                    1,
                    0
                ],
                "path_positions": [
                    4,
                    3,
                    0,
                    1
                ],
                "path_elements": [
                    "0x10d6373c1464696f856fbfee98132e28166f0227a6e40ab501d5468ae73f1c22",
                    "0x2bb5136f5053629d470a7df2ea75ea49885714c98802c4b16fd42fd4359a2166",
                    "0x2357f678f06b3729cd17d0232af0cb5597aeb9695690b93f4ed613772712bb72",
                    "0x073f04e5838a95e2a635da7cbbf87b60cd974b5cad98b5638fd96e71cc5eb130"
                ]
            }
        },
        {
            "record_hash": "2b06d56c6d1812babd87d3cd0127a8f4d92a56130bd57f11aacd51d8a4e634c3",
            "record_data": {
                "txHash": "0x09f5d7c15f730477e75089671c26c74edcf2d3c13c030ae7f000d20689feb920",
                "outputCommitment1": "0x0d85948b189c8f06ba0e0ddc10465e7b38346fff32a5e7653e1b127bcb61bad1",
                "outputCommitment2": "0x282eed96a8315d3fef4a8b44a481ea321164d419e155d0f5cc670b1cbd8d922c",
                "nullifierHash1": "0x1b561048777361b051d2bfc6ea865c7b6227a7c74e5dcd22a52e41932727d2f9",
                "nullifierHash2": "0x230520e4c6f2b20f7c60c385f08fc55bcd16f94b0ab24be01c8cb9add6675984",
                "recordIndex": 46,
                "publicAmount": "00000000000000000000000000000000000000000000000000038d7ea4c68000"
            },
            "merkle_proof": {
                "merkle_tree_max_depth": 4,
                "leaf": "0x2b06d56c6d1812babd87d3cd0127a8f4d92a56130bd57f11aacd51d8a4e634c3",
                "leaf_index": 6,
                "path_root": "0x002915b4928a5b34454158b06c50777f555f307b7fcace62f666e1586ee899b1",
                "path_indices": [
                    0,
                    1,
                    1,
                    0
                ],
                "path_positions": [
                    7,
                    2,
                    0,
                    1
                ],
                "path_elements": [
                    "0x18e44125cbb1fe0d81d0c1694bde77ba35a2cb04dc1ee4d993809d919080da22",
                    "0x2fc47895108f3de39eea196f7d2bcc12a7253943e9f359c9bea03a76fed0f03e",
                    "0x2357f678f06b3729cd17d0232af0cb5597aeb9695690b93f4ed613772712bb72",
                    "0x073f04e5838a95e2a635da7cbbf87b60cd974b5cad98b5638fd96e71cc5eb130"
                ]
            }
        },
        {
            "record_hash": "18e44125cbb1fe0d81d0c1694bde77ba35a2cb04dc1ee4d993809d919080da22",
            "record_data": {
                "txHash": "0x11258c721b6a7b3dd2dbd63423b0ee4a6d410f5161b0b35372b95c328a2f1d54",
                "outputCommitment1": "0x0d610b983dcbb8ee71abe7718c42e1c60153d58533dd7e3fbc4f5bf070e389eb",
                "outputCommitment2": "0x25722dfcb5efc4e537ffe27163a5e4c35642d96c8e925e4c8c9a1ed3a1ed2ece",
                "nullifierHash1": "0x15d9acfd0d5e541ada7b4132c2df7d2602dcda7708e086fa61b9f13b0c4b4054",
                "nullifierHash2": "0x0961572c8fe8b211d84bfe62479405dd87a0e67611ce9511bddb0b065025e0be",
                "recordIndex": 102,
                "publicAmount": "00000000000000000000000000000000000000000000000000b1a2bc2ec50000"
            },
            "merkle_proof": {
                "merkle_tree_max_depth": 4,
                "leaf": "0x18e44125cbb1fe0d81d0c1694bde77ba35a2cb04dc1ee4d993809d919080da22",
                "leaf_index": 7,
                "path_root": "0x002915b4928a5b34454158b06c50777f555f307b7fcace62f666e1586ee899b1",
                "path_indices": [
                    1,
                    1,
                    1,
                    0
                ],
                "path_positions": [
                    6,
                    2,
                    0,
                    1
                ],
                "path_elements": [
                    "0x2b06d56c6d1812babd87d3cd0127a8f4d92a56130bd57f11aacd51d8a4e634c3",
                    "0x2fc47895108f3de39eea196f7d2bcc12a7253943e9f359c9bea03a76fed0f03e",
                    "0x2357f678f06b3729cd17d0232af0cb5597aeb9695690b93f4ed613772712bb72",
                    "0x073f04e5838a95e2a635da7cbbf87b60cd974b5cad98b5638fd96e71cc5eb130"
                ]
            }
        },
        {
            "record_hash": "03512b924c8c0d98a9ad40a1b9b934f83139adfc281fa120b755578a73457b63",
            "record_data": {
                "txHash": "0xe5a319daa4ee50aa447c9a8ea0ac560d0d637ec4cac030e8919016d905f1071f",
                "outputCommitment1": "0x1240adadbb08ec7e69f0751b164b56e21521f78c1b0eb499c96d92caf47442b4",
                "outputCommitment2": "0x1da2712b9feb81f3da5ce2e360e4bb8d77346e6d33ab2f16ac3dc8fd5a318e0b",
                "nullifierHash1": "0x19e34548f6a584dab328f1c3fc2e9277a1653c751734a316d39d7e6f53175c99",
                "nullifierHash2": "0x05dc56e8cb7458085d6c221a208d8afb406a37c1680d92949a10167edcf2bb87",
                "recordIndex": 152,
                "publicAmount": "000000000000000000000000000000000000000000000000016345785d8a0000"
            },
            "merkle_proof": {
                "merkle_tree_max_depth": 4,
                "leaf": "0x03512b924c8c0d98a9ad40a1b9b934f83139adfc281fa120b755578a73457b63",
                "leaf_index": 8,
                "path_root": "0x002915b4928a5b34454158b06c50777f555f307b7fcace62f666e1586ee899b1",
                "path_indices": [
                    0,
                    0,
                    0,
                    1
                ],
                "path_positions": [
                    0,
                    0,
                    0,
                    0
                ],
                "path_elements": [
                    "0x2fe54c60d3acabf3343a35b6eba15db4821b340f76e741e2249685ed4899af6c",
                    "0x1a332ca2cd2436bdc6796e6e4244ebf6f7e359868b7252e55342f766e4088082",
                    "0x2fb19ac27499bdf9d7d3b387eff42b6d12bffbc6206e81d0ef0b0d6b24520ebd",
                    "0x2706ceb05a41606d32b2995e5586beecfb8dad66a6dc4f2e68f0b5a8e01ecf29"
                ]
            }
        }
    ],
    "ipfsHash": "",
    "txHash": "",
    "status": "SUCCESS",
    "timestamp": 1723181615
}

🟡 POST /api/v1/records/filter

Endpoint is a Work-In-Progress

Description

Context:

The latest revision of the ASP implements the categorization process per ASP specification v1.0, where records are classified with multiple categories mapped to 252 bits (the category bitmap).

The record-to-category-bitmap mapping is stored in the onchain Registry.

This endpoint provides a way to filter a set of record hashes against those category bitmaps.
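
A filter bitmap is built by setting the bit assigned to each category of interest. The category indices below are hypothetical; the deployed schema assigns the real ones:

// Hypothetical category indices, for illustration only.
uint256 constant CAT_KYC_VERIFIED = 0;
uint256 constant CAT_SANCTIONED = 7;

// Filter matching records flagged KYC_VERIFIED or SANCTIONED when type = 1 (Union).
bytes32 constant FILTER =
    bytes32((uint256(1) << CAT_KYC_VERIFIED) | (uint256(1) << CAT_SANCTIONED));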


Body Parameters

| name | type | data type | description |
| ---- | ---- | --------- | ----------- |
| scope | required | string | unique identifier for the protocol |
| subSet | | string array | set of record hashes |
| filter | | string | hex-encoded bitmap filter |
| type | | enum | defines the type of predicate to apply |
| complete | | boolean | flag for including complete record data |

Example Body

{
  "scope": "0xd234x67851b11a21",
  "subset": [
    "0x113143e9dae0aa58d13b26dec085606d28fafe70582ec52fd5bbc08ae8d5b5c9",
    "0x1aa21d201f72b61e0e59bdd7a0ef62dced57e4e80fa180ff113a58dc3aeb8ea9",
    "0x8ba306635d7838c1378a9243c22487f906ec929a5a8d5c30f172a9bc5824d64",
    "0x2dca7e37ec7e31d0e56b456e6ed435ced4c506b6dada186f6a14907ecc50a37e"
  ],
  "filter": "0x1234567891011121",
  "type": 1,
  "complete": true
}

The type field selects the predicate to apply against the filter bitmap, mirroring the PredicateType enum described in section 6.3: 0 (Intersection), 1 (Union), 2 (Complement).

Responses

| http code | content-type | response |
| --------- | ------------ | -------- |
| 200 | application/json; charset=utf-8 | JSON Object |

Example cURL


API="api.0xbow.io"
ENDPOINT="/api/v1/inclusion"
CHAIN="sepolia"
CONTRACT="0x8e3E4702B4ec7400ef15fba30B3e4bfdc72aBC3B"
HASH_ONLY="false"
SIZE_LIMIT="20"
PIN_TO_IPFS="false"

URI="${API}${ENDPOINT}?"
URI+="chain=${CHAIN}&"
URI+="contract=${CONTRACT}&"
URI+="hash_only=${HASH_ONLY}&"
URI+="size_limit=${SIZE_LIMIT}&"
URI+="pin_to_ipfs=${PIN_TO_IPFS}"

curl --location --request POST $URI \
--header "Content-Type: application/json" \
--data "{
    \"hashSet\": [],
    \"hashFilter\": \"\"
}"

Development Roadmap

0xBow began building a PoC Association-Set Provider (ASP) in late December 2023 with the goal of launching the first ASP service for Privacy Pool, a novel ZK-based privacy protocol.

Throughout Q1 2024, 0xBow continued to hit key milestones in the development of the ASP, proving its feasibility and utility for Privacy Pool. At EthDenver, 0xBow presented a live demo on stage, showcasing the ASP’s capabilities and potential.

In Q2 2024, progress slowed as difficulties surrounding Privacy Pool delayed its launch.

In late April, Chainway Labs handed over Privacy Pool to 0xBow to ensure its completion, a transition that was not without its challenges.

In Q3 2024, 0xBow has been working tirelessly on re-implementing Privacy Pool, which involves revising the zk-circuits, smart contracts and the UI webapp.

As 0xBow enters Q4 2024, its priorities remain unchanged:

  • Production readiness

    The ASP and Privacy Pool are close to completion and will be ready for launch soon.

    0xBow has exercised engineering due diligence throughout the development lifecycle of the ASP and Privacy Pool. Compliance is not just an attribute of 0xBow’s product, but also a core value of the organization.

  • Secure Partnerships

    0xBow is actively engaging other protocols and organizations to integrate the ASP with other systems. Key partnerships will be announced soon.

  • Development Sustainability

    0xBow is committed to the long-term development of the ASP and Privacy Pool. 0xBow’s mission is to protect the future of on-chain privacy, and provide the infrastructure necessary to guarantee privacy as a public good.

    A more detailed roadmap for the next year and beyond will be published soon.

  • Growth

    0xBow is actively seeking to grow its team and is looking for talented individuals. If you are interested in contributing to the development of the ASP and Privacy Pool, please get in touch.

For Onchain Protocols

Protocol / DApp Integration Pathway

0xBow ASP v1.0 go-build-kit

0xBow ASP v1.0 go-build-kit is a set of primitive Go modules for building custom ASP solutions.

With the go-build-kit, you can easily:

  • Interact with existing ASP services
  • Integrate ASP modules into your protocol / DApp
  • Build & deploy your own custom ASP services

go-build-kit is open-source and will be available soon.

0xBow ASP v1.0 implements an extensible `Integration Framework` which offers a broad range of functionality that can be readily integrated into custom solutions for your protocol / DApp.

0xBow ASP offers REST, gRPC and WebSocket APIs to support offchain integration and onchain contracts for onchain integration.

If your requirements are not met by these existing APIs, you can register for a custom integration with the ASP by following the steps below.

All integration efforts will contribute to the maturity & adoption of the ASP ❤️

How to Register?

You can find all prior registrations in the Protocol Registry page.

A Registration is the acknowledgement of an integration request and marks the beginning of the integration process. It is a formal step that allows both parties (i.e. 0xBow ASP and Protocol X) to track the progress of the integration process.

To register, you will first need to submit a new Integration Request issue in the asp-spec-v1.0a repository.

Be sure to specify the following details in your request:

  • Integration Type: Protocol/dApp Integration

  • Integration Target: The name of your protocol / dApp (i.e. “Protocol X”)

  • Involvement: What’s your involvement with the protocol / dApp? (i.e. Developer / Engineer, Founder, etc.)

  • Contact Information: How can we reach you? (i.e. Email, Twitter, Telegram, etc.)

  • Integration Description: A brief description of your protocol / dApp and the integration requirements.

  • What are the possible integration options?

    0xBow has taken a modular approach to the ASP implementation, allowing for external integrations to be made with ease.

    Your protocol / DApp can leverage independent ASP services & modules to suit your specific requirements, e.g.:

    • Utilize the ASP Watcher service to observe and record protocol / DApp state-transitions:
      • Integrate observer & state-transition recorder modules into your services.
      • Or subscribe to the Watcher’s WebSocket endpoints to receive event streams.
    • Utilize the ASP Categorization Engine to categorize specific events:
      • Subscribe to Categorization Engine WebSocket endpoints
      • Request the categorization of a Record via gRPC or REST API.
      • Integrate the categorization pipeline into your services.
    • Leverage the onchain ASP Public Registry or offchain Record Archive to support business rules or compliant privacy-preserving mechanisms (e.g. as public inputs to onchain verifier contracts).

    Here are some example use-cases:


    Use-case 1: Restricted ERC20 Token Airdrop

    Protocol X is planning to airdrop ERC20 tokens to a restricted set of accounts.

    The conditions for the airdrop:

    • Account must have directly interacted with the Protocol.
    • Account must have a minimum balance of 1 ETH.
    • Account must have a minimum of 100 transactions.
    • Account is not directly or indirectly associated with any illicit activities.

    Integration Path:

    • ASP generates the schema for the Airdrop Eligible category with features reflecting the specified conditions.
    • ASP will record all protocol interactions, categorize them and publish the category bitmaps to the Public Registry.
    • ASP will deploy a registry-adapter contract which contains a mapping of account addresses & record hashes, as well as the bitmap filter for Airdrop Eligible.
    • The Airdrop contract can now integrate with the registry-adapter to ensure that only eligible accounts receive the airdrop, as sketched below.
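
    A minimal sketch of that last step; the IRegistryAdapter interface and its isEligible method are hypothetical names, not part of this specification:

    pragma solidity ^0.8.0;

    interface IERC20 {
        function transfer(address to, uint256 amount) external returns (bool);
    }

    // Hypothetical adapter interface; the real registry-adapter ABI may differ.
    interface IRegistryAdapter {
        function isEligible(address account) external view returns (bool);
    }

    contract RestrictedAirdrop {
        IERC20 public immutable token;
        IRegistryAdapter public immutable adapter;

        constructor(IERC20 _token, IRegistryAdapter _adapter) {
            token = _token;
            adapter = _adapter;
        }

        // Only accounts categorized as Airdrop Eligible can claim.
        function claim(uint256 amount) external {
            require(adapter.isEligible(msg.sender), "not Airdrop Eligible");
            require(token.transfer(msg.sender, amount), "transfer failed");
        }
    }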

    Use-case 2: Compliant ERC-4337 Paymaster

    Protocol Y wishes to implement a compliant ERC-4337 Paymaster.

    The compliance rules:

    • Account must have completed KYC verification.
    • Account’s UserOps are not associated with any illicit activities.

    Integration Path:

    • ASP generates the schema for the COMPLIANT_ACCOUNT category with features reflecting the specified conditions.
    • ASP will record all protocol interactions, categorize them and publish the category bitmaps to the Public Registry.
    • ASP will deploy a registry-adapter contract which contains a mapping of account addresses & record hashes, as well as the bitmap filter for COMPLIANT_ACCOUNT.
    • The Paymaster can now integrate the registry-adapter into its validatePaymasterUserOp function to ensure that only compliant accounts can interact with the Paymaster, as sketched below.
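
    A minimal sketch of the validation hook; the IRegistryAdapter interface is hypothetical and the ERC-4337 signature is simplified (the real validatePaymasterUserOp takes a UserOperation struct):

    pragma solidity ^0.8.0;

    // Hypothetical adapter interface; the real registry-adapter ABI may differ.
    interface IRegistryAdapter {
        function isCompliant(address account) external view returns (bool);
    }

    contract CompliantPaymaster {
        IRegistryAdapter public immutable adapter;

        constructor(IRegistryAdapter _adapter) {
            adapter = _adapter;
        }

        // Simplified ERC-4337 hook: only the sender compliance check is shown.
        function validatePaymasterUserOp(
            address sender
        ) external view returns (bytes memory context, uint256 validationData) {
            require(adapter.isCompliant(sender), "account not COMPLIANT_ACCOUNT");
            return ("", 0);
        }
    }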

  • What details are required for the Integration Description?

    Technical context on Protocol-ASP integration can be found in section 3.1 Protocol Requirements.

    The Integration Description should provide a brief overview of your protocol / DApp and the integration requirements. This should include:

    • A brief description of your protocol / DApp
    • The integration requirements
    • Any specific features or functionalities that you would like to integrate
    • Any specific modules or services that you would like to leverage

    The more detailed the Integration Description, the better we can understand your requirements and provide a tailored integration solution.

  • This is too confusing for me!

    If you are unsure about the registration process or how the ASP can be integrated with your protocol / DApp, feel free to reach out to us at 0xBow.io.

    We’re happy to guide you through the process and answer any questions you may have.

  • What happens after I submit the registration request?

    After submission, 0xBow will review the integration requirements, conduct workshop sessions to plan the integration process, and deliver a detailed integration plan with timelines. Once complete, 0xBow will request a signoff on the integration plan.

    Upon signoff, the integration request will be documented in the Protocol Registry page with links to the integration project tracking page.

Integration Options

For Networks

How To Register

Possible Integrations

For Existing ASP operators

How To Register

Integration Options

Protocol Registry

Privacy Pool

Aztec L2

Changelog

Contributors