Provide a comprehensive integration guide for protocols, end-users and compliance systems.
Establish a standardized framework for building and operating an ASP.
Define the technical specifications and interfaces for each component of the ASP architecture.
Outline best practices for ensuring security, scalability, and privacy in ASP implementations.
The Association-Set Provider (ASP) Specification V1.0, developed by 0xBow.io, defines a standardized
framework for implementing, operating, or integrating with the ASP system.
The ASP is designed to support compliance mechanisms for blockchain protocols, enabling
the verification of compliance with regulatory requirements and business rules.
It aims to enable privacy-preserving compliance for blockchain protocols such
as Privacy Pool, by leveraging zero-knowledge proofs (ZKPs) and
efficient data categorization techniques.
If you are an end-user looking to utilise the ASP, or you are onboarding end-users
to a platform that utilises ASP services, refer to
8.2 Interacting with Protocols
If you are operating/implementing an ASP designed to a specification different from 0xBow ASP V1.0 and
wish to integrate with the 0xBow ASP system, please view the
ASP Interoperability section.
“Instead of merely proving that their withdrawal is
linked to some previously-made deposit, a user proves membership in a more restrictive association set.
This association set could be the full subset of previously-made deposits, a set consisting only of the user’s
own deposits, or anything in between …
Users will subscribe to intermediaries, called association set providers (ASPs),
which generate association sets that have certain properties” [1]
The 0xBow ASP system is an implementation of the ASP concept initially introduced for Privacy Pool, now extended to
facilitate compliance mechanisms across multiple blockchain protocols.
Its architecture is relatively simple and consists of two main components:
Service Stack: Two modular services working in concert to monitor, classify, and verify state transitions.
On-Chain Instances: Components supporting onchain integrations with the ASP.
[1] ["Blockchain privacy and regulatory compliance: Towards a practical equilibrium"](https://www.sciencedirect.com/science/article/pii/S2096720923000519),
Buterin, V., Soleimani, A., et al., 2023
The Observer is a service that monitors & records the state-changes of specific protocols in real-time.
It is comprised of the following modules:
Watcher: Watches the network for signals (e.g. event emissions) by protocols that indicate a state-change has occurred.
It requires protocol-specific components (adapter & parser) in order to interface with the protocol
at the network level.
It is possible to have a 1-to-1 implementation of the watcher module for each protocol,
or a single watcher module that can be configured to monitor multiple protocols via pluggable adapters & parsers, as sketched below.
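The following Go sketch illustrates one way such pluggable adapters & parsers could be modelled; the interface names and fields (EventSignal, Adapter, Parser) are illustrative assumptions, not normative spec types.

package watcher

// EventSignal is a normalized representation of a protocol signal.
type EventSignal struct {
Scope       []byte // protocol instance identifier
BlockNumber uint64
TxHash      [32]byte
Payload     []byte // raw, protocol-specific event data
}

// Adapter handles the network-level connection to a protocol,
// e.g. an RPC subscription to contract logs.
type Adapter interface {
Subscribe(topics [][]byte) (<-chan []byte, error)
Close() error
}

// Parser decodes raw network data into a normalized EventSignal.
type Parser interface {
Parse(raw []byte) (*EventSignal, error)
}

// Watcher fans signals from multiple protocols into one stream
// by pairing each protocol's Adapter with its Parser.
type Watcher struct {
adapters map[string]Adapter
parsers  map[string]Parser
out      chan *EventSignal
}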
State Transition Detector: Identifies and validates the state transitions signalled by the watcher module.
The Detector is notified of the state-change by the watcher module.
In response, it attempts to rebuild a representation of the state
from data cached in the state buffer.
The state buffer is a ring-buffer for efficient caching of new states and quick retrieval of old states.
It then compares the current state with the previous state using the protocol’s state comparator function:
ΔS(s,s’)→δ
Where δ ≠ 0 indicates a state transition. A minimal sketch of this detection flow follows.
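Below is a hedged Go sketch of the detection flow: a ring-buffer cache, a comparator ΔS, and an event handler. All names (StateRepr, RingBuffer, OnEvent) are illustrative assumptions rather than spec-defined types.

package detector

// StateRepr is a compact state representation (e.g. a state-root or proof).
type StateRepr []byte

// Comparator implements the protocol's ΔS(s, s') → δ;
// a non-zero δ indicates a state transition.
type Comparator func(s, sPrime StateRepr) (delta int)

// RingBuffer caches new states and allows quick retrieval of old states.
type RingBuffer struct {
buf  []StateRepr
head int
}

// Push overwrites the oldest slot with the newest state.
func (r *RingBuffer) Push(s StateRepr) {
r.buf[r.head] = s
r.head = (r.head + 1) % len(r.buf)
}

// Latest returns the most recently cached state.
func (r *RingBuffer) Latest() StateRepr {
return r.buf[(r.head+len(r.buf)-1)%len(r.buf)]
}

// OnEvent rebuilds a state representation from the cached pre-image and
// the event payload, then applies ΔS to detect a transition.
func OnEvent(cache *RingBuffer, rebuild func(StateRepr, []byte) StateRepr,
cmp Comparator, event []byte) (s, sPrime StateRepr, detected bool) {
s = cache.Latest()
sPrime = rebuild(s, event)
if cmp(s, sPrime) != 0 { // δ ≠ 0 ⇒ state transition
cache.Push(sPrime)
return s, sPrime, true
}
return s, sPrime, false
}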
Record Generator: Creates cryptographic records of state transitions, defined as the tuple:
R=(Scope,e,h,h’)
Where:
Scope is the unique identifier of the protocol instance
e is the reference to the state transition event (i.e. the block number, transaction hash)
h is the pre-state hash
h’ is the post-state hash
Where h & h’ are computed by a state-hash function: H(s,e)→h
The Categorization Engine is crucial to the ASP’s ability to support compliance mechanisms and to allow end-users to generate
“Association Sets”.
The objective of categorization is to correctly identify attributes or properties (expressed as categories)
of the state-transition event which are relevant to the compliance requirements specified by the protocol (or other entities).
The Categorization Engine executes a FIFO pipeline of feature-extraction, classification & categorization algorithms
to categorise the state-transition event referenced by the record R.
The output is a 256-bit vector termed “category bitmap” or B, where each bit represents a specific category.
The Category Pipeline is the sequential execution of (sketched below):
Feature Extractors: Analyze records to extract relevant features for classification.
Classifiers: Categorize records based on extracted feature sets.
Categorizers: Generate a 256-bit category bitmap reflecting the record’s categories.
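As a rough illustration, the FIFO pipeline could be modelled with the following Go stage interfaces; these signatures are assumptions for illustration, not normative definitions.

package categorization

type Record []byte   // serialized record R
type Bitmap [32]byte // 256-bit category bitmap B

// ExtractorStage analyzes a record and emits serialized features.
type ExtractorStage interface {
Extract(r Record) ([]byte, error)
}

// ClassifierStage maps extracted features to category labels.
type ClassifierStage interface {
Classify(features []byte) ([]string, error)
}

// CategorizerStage sets the bitmap bits matching the labels.
type CategorizerStage interface {
Categorize(labels []string, b *Bitmap) error
}

// Run executes the stages in FIFO order and returns the bitmap B.
func Run(r Record, ex ExtractorStage, cl ClassifierStage, ct CategorizerStage) (Bitmap, error) {
var b Bitmap
features, err := ex.Extract(r)
if err != nil {
return b, err
}
labels, err := cl.Classify(features)
if err != nil {
return b, err
}
return b, ct.Categorize(labels, &b)
}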
To register a protocol, please view the steps
outlined in the "For Onchain Protocols" section.
State transition monitoring involves observing the blockchain for relevant events
and state changes in integrated protocols.
---
title: "Figure 3.1: State Transition Monitoring Sequence Diagram"
---
%%{
init: {
'theme': 'base',
'themeVariables': {
'primaryColor': '#1e1e2e',
'primaryTextColor': '#cdd6f4',
'primaryBorderColor': '#89b4fa',
'lineColor': '#fab387',
'secondaryColor': '#181825',
'tertiaryColor': '#1e1e2e',
"clusterBorder": "#f2cdcd",
'noteTextColor': '#f5e0dc',
'noteBkgColor': '#f5c2e7',
'notesBorderColor': '#cba6f7',
'textColor': '#f5e0dc',
'fontSize': '16px',
'labelTextColor': '#f5e0dc',
'actorBorder': '#89b4fa',
'actorBkg': '#1e1e2e',
'actorTextColor': '#f5e0dc',
'actorLineColor': '#89b4fa',
'signalColor': '#cdd6f4',
'signalTextColor': '#f5e0dc',
'messageTextColor': '#f5e0dc',
'messageLine0TextColor': '#f5e0dc',
'messageLine1TextColor': '#f5e0dc',
'loopTextColor': '#f5e0dc',
'activationBorderColor': '#f5c2e7',
'activationBkgColor': '#1e1e2e',
'sequenceNumberColor': '#1e1e2e'
}
}
}%%
sequenceDiagram
participant BC as Protocol
participant PM as Watcher
participant STD as State Transition Detector
participant RG as Record Generator
BC->>PM: Event signal (e)
PM->>STD: Forward Event signal (e)
STD->>STD: Reconstruct States (s, s')
STD->>STD: Compare States (ΔS)
alt δ ≠ 0
STD->>RG: Detected State-Transition (e, s, s')
RG->>RG: Hash(s') for h'
RG->>RG: Hash(s) for h
alt h' ≠ h
RG->>RG: Record State-Transition (e, s, s')
else h' = h
RG->>RG: Discard
end
else δ = 0
STD->>STD: Discard
end
The process follows these steps:
Protocol Registration: Protocols are registered with the ASP system, providing their Scope function and event signatures.
Event Listening: The Watcher subscribes to signals (event emissions) from registered protocols that indicate a state change
and forwards the event signal e to the State Transition Detector.
State Reconstruction:
Important
It is expensive & inefficient for the Observer to reconstruct or store the entire state of the protocol.
s and s′ are only state representations / proofs which carry enough information to verify a state-transition with
comparator function ΔS.
For example: s could be a merkle-proof of a state-root, and s′ could be the new state-root.
With a well-defined state-space S and state transition-function T, s and/or s’ are reconstructed from
data carried by e and the cached pre-image read from the state-buffer.
State Comparison: The ΔS function is applied to determine if a meaningful state transition has occurred:
δ=ΔS(s,s’)
Where δ ≠ 0 indicates a state transition.
Trigger Record Generation: If δ ≠ 0, the tuple (e,s,s’) is sent to the Record Generator to compose a cryptographic record of the state transition.
Record Generation: The Record Generator hashes the new state s’ and the previous state s to create a record of the state transition:
R=(Scope,e,h,h’)
Where h and h’ are the hashes of the previous and new states, and h ≠ h’.
A Record (R) is a data structure that captures the state transition of a protocol instance and
can be represented as a tuple:
R=(Scope,e,h,h’)
Where:
Scope is the unique identifier of the protocol instance
e is the reference to the state transition event (i.e. the block number, transaction hash, log index)
h is the pre-state hash
h’ is the post-state hash
h ≠ h’
Records exist as serialized binary objects.
The construction of the Record object is performed by the Record Generator, a component of the ASP system.
The Record Generator performs the following steps (a sketch follows):
Compute the Scope using the protocol’s Scope function if not already provided.
Compute h and h’ using the protocol’s state hash function H.
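A hedged Go sketch of the Record tuple and its construction, assuming Keccak-256 as the state-hash function; the field types and hashing layout are illustrative, not normative.

package record

import "golang.org/x/crypto/sha3"

// EventRef references the state-transition event e.
type EventRef struct {
BlockNumber uint64
TxHash      [32]byte
LogIndex    uint32
}

// Record is the tuple R = (Scope, e, h, h').
type Record struct {
Scope    [32]byte // unique protocol-instance identifier
Event    EventRef // e
PreHash  [32]byte // h
PostHash [32]byte // h'
}

// StateHash sketches H(s, e) → h over a serialized state representation.
func StateHash(s []byte, e EventRef) (h [32]byte) {
d := sha3.NewLegacyKeccak256()
d.Write(s)
d.Write(e.TxHash[:])
d.Sum(h[:0])
return h
}

// NewRecord composes R, discarding the no-op case h = h'.
func NewRecord(scope [32]byte, e EventRef, s, sPrime []byte) (*Record, bool) {
h, hPrime := StateHash(s, e), StateHash(sPrime, e)
if h == hPrime {
return nil, false // no state transition
}
return &Record{Scope: scope, Event: e, PreHash: h, PostHash: hPrime}, true
}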
To see all possible integrations with the ASP see this section.
When integrating a protocol with the ASP system, consider the following best practices:
Efficient State Representation: Design the state space S to be as compact as possible while still capturing all relevant information.
Granular Events: Emit fine-grained events for state changes to allow precise monitoring and record generation.
Optimized Hash Functions: Implement efficient hash functions for Scope and H to minimize computational overhead.
Example of a Scope function in Solidity:
function computeScope() public view returns (bytes32) {
return keccak256(abi.encodePacked(
address(this),
block.chainid,
_VERSION
));
}
Versioning: Include protocol version information in the Scope to handle protocol upgrades gracefully.
Gas Optimization: For on-chain components, optimize gas usage in event emission and state transitions.
Privacy Considerations: Ensure that emitted events and exposed state do not leak sensitive information.
Deterministic Implementations: Guarantee deterministic behavior in all protocol functions to ensure consistent record generation across different nodes.
Cross-Chain Compatibility: For protocols operating across multiple chains, ensure the Scope function incorporates chain identifiers.
Testnet Integration: Always test ASP integration on testnets before deploying to mainnet.
Documentation: Provide comprehensive documentation of the protocol’s state space,
transition functions, and event structures to facilitate seamless integration.
Feature-extraction refers to the process of transforming raw data into numerical features that can
be processed whilst preserving the information in the original data set.
Feature extraction is category-driven, whereby features are properties, attributes,
or characteristics of a Record that are meaningfully grouped together to form a category.
Example features:
Accounts associated with the Record belonging to a certain list, i.e. the wallet address that executed the transaction:
Feature: ACCOUNT_BLACKLISTED
Asset exposure to certain sources or activities such as Gambling, Money Laundering, etc.
Feature: ASSET_EXPOSURE_MONEYLAUNDERING
Volume of Assets transferred by a particular list of accounts
Feature: ACCOUNTLIST_ASSET_TRANSFER_VOLUME
Time of the day when the transaction was executed
Feature: TRANSACTION_TIME
If a certain internal call went against a specific Policy
Feature: POLICY_0X1A0B_VIOLATION
Example Categories:
UNAPPROVED category that groups the following features:
POLICY_0X1A0B_VIOLATION
ACCOUNTLIST_ASSET_TRANSFER_VOLUME
APPROVED category that groups the following features:
ACCOUNT_BLACKLISTED
ASSET_EXPOSURE_MONEYLAUNDERING
SUSPICIOUS category that groups the following features:
ACCOUNT_BLACKLISTED
ASSET_EXPOSURE_MONEYLAUNDERING
The Feature Extractor is responsible for extracting these features from the Record and delivering
them in a structured format. Its interface should be minimalistic and easy to implement.
Below is an example interface for the Feature Extractor:
package extractor
/// FeatureExtractorMetadata
/// provides metadata about the Feature Extractor
type FeatureExtractorMetadata interface {
Name() string
Description() string
Version() string
Author() string
License() string
URL() string
// FeatureSchema returns
// the serialized JSON schema for the Feature
FeatureSchema() []byte
PluginList() []string
}
type FeatureExtractor interface {
FeatureExtractorMetadata
// ExtractFeatures extracts
// features from the given serialized Record
ExtractFeatures(record []byte) []byte
}
The ExtractFeatures function is the core method: it processes a given Record object and returns the data
required for the classification process.
Valuable and meaningful classification depends on the feature extractor’s ability to extract the required features.
To assist with this, the ASP system should define a set of features that are relevant to the classification process.
Features should be represented in a standardized, structured, and verifiable format to ensure compatibility
across the different components of the ASP system and to facilitate interoperability with external systems.
JSON Schemas like the one above can be generated by a schema generator.
Below is an example walkthrough of creating a schema generator in Go using the
swaggest/jsonschema-go package:
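This is a minimal sketch: it reflects a hypothetical HighRiskFeatures struct (the struct name and fields are illustrative assumptions) into a JSON schema using the Reflector type from swaggest/jsonschema-go.

package main

import (
"encoding/json"
"fmt"

"github.com/swaggest/jsonschema-go"
)

// HighRiskFeatures is a hypothetical feature document for a category;
// the struct tags drive the generated JSON schema.
type HighRiskFeatures struct {
AccountBlacklisted           bool `json:"ACCOUNT_BLACKLISTED"`
AssetExposureMoneyLaundering bool `json:"ASSET_EXPOSURE_MONEYLAUNDERING"`
}

func main() {
// Reflect the struct into a JSON schema.
reflector := jsonschema.Reflector{}
schema, err := reflector.Reflect(HighRiskFeatures{})
if err != nil {
panic(err)
}
// Serialize the schema for storage or distribution.
j, err := json.MarshalIndent(schema, "", " ")
if err != nil {
panic(err)
}
fmt.Println(string(j))
}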
Feature extraction can delegate to / leverage specialized services via plugins, i.e.:
Chainalysis can extract AML-compliance-related features.
An adapter for the Chainalysis API can be implemented as a plugin.
This plugin allows feature extraction to leverage Chainalysis services.
Specialized feature-extraction logic / algorithms can be implemented as gadgets, i.e.:
EVM-Call-Tracer gadget for fast tracing of internal calls.
Token-Tracker gadget for tracking all token movements.
### 4.3.1 Considerations
As faults in feature extraction can have downstream impact on the classification process,
the implementation of a feature extractor must take into account the
complexity of the feature-extraction process to ensure that:
The process is reliable and deterministic to avoid errors.
The process is verifiable to ensure that the extracted features are correct.
The process is designed to be as fast as possible to avoid delays in the classification process.
The process is designed to be as efficient as possible to avoid unnecessary resource consumption.
The process is designed to be as secure as possible to avoid data leaks.
The process is designed to be as scalable as possible to handle large volumes of data.
To eliminate unwanted influence by other components, the feature extractor is designed to:
Accept only the Record as input.
Be integrated with access-control systems with policies to restrict access
to the feature-extraction configuration or runtime environment.
Log any incoming data packets from external sources to ensure data integrity.
The output of the feature-extraction process is serialized data encoded in a verifiable format.
The extractor should sign its output to ensure that the data is not tampered with during transit, as sketched below.
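A minimal sketch of this integrity step using Ed25519 from the Go standard library; key management and the output payload here are illustrative only.

package main

import (
"crypto/ed25519"
"crypto/rand"
"fmt"
)

func main() {
// Hypothetical extractor keypair; a real deployment would load a managed key.
pub, priv, _ := ed25519.GenerateKey(rand.Reader)
// Serialized extractor output (illustrative JSON Patch operations).
output := []byte(`[{"op":"replace","path":"/features/ACCOUNT_BLACKLISTED","value":"0x01"}]`)
// Sign on emit; consumers verify before applying the operations.
sig := ed25519.Sign(priv, output)
fmt.Println(ed25519.Verify(pub, output, sig)) // true if untampered
}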
The output should not be a stateful object such as a document or a database record consisting
of the extracted features. Instead, it is a set of verifiable JSON Patch operations
that can be applied to a known & verified state to derive the extracted features (see the sketch below).
These operations are applied to a default feature-document, which is a document containing the features for a category (per schema)
but with default values (encrypted or encoded values).
This document is represented as a
merkle-tree, where merkle-proofs are then used to verify data integrity.
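To make this concrete, here is a hedged Go sketch that applies extractor-emitted JSON Patch operations to a default feature-document using the github.com/evanphx/json-patch library; the document layout and feature names are illustrative assumptions.

package main

import (
"fmt"

jsonpatch "github.com/evanphx/json-patch"
)

func main() {
// Default feature-document: every feature present with a default encoded value.
defaultDoc := []byte(`{"features":{"ACCOUNT_BLACKLISTED":"0x00","ASSET_EXPOSURE_MONEYLAUNDERING":"0x00"}}`)

// Extractor output: verifiable JSON Patch operations, not a stateful document.
ops := []byte(`[{"op":"replace","path":"/features/ACCOUNT_BLACKLISTED","value":"0x01"}]`)

patch, err := jsonpatch.DecodePatch(ops)
if err != nil {
panic(err)
}
// Apply the operations to derive the extracted features.
derived, err := patch.Apply(defaultDoc)
if err != nil {
panic(err)
}
fmt.Println(string(derived)) // the derived feature-document
}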
The feature-extraction process should be modularized to allow for easy extension, testing, and maintenance.
Interfaces to external systems (i.e. the Chainalysis API) should be abstracted to allow dependency injection
and to support testing of the feature-extraction process in isolation.
Below is an example of a feature extractor implemented in Go.
It utilises plugins for interfacing with external systems for feature extraction
and gadgets (i.e. for interpreting EVM call-traces) for specialized feature extraction.
package highRiskCat
import (
"encoding/json"
. "github.com/0xbow-io/asp-spec-V1.0/pkg/feature/extraction/storage"
. "github.com/0xbow-io/asp-spec-V1.0/pkg/feature/extraction/extractors"
. "github.com/0xbow-io/asp-spec-V1.0/pkg/feature/extraction/gadgets"
. "github.com/0xbow-io/asp-spec-V1.0/pkg/feature/extraction/plugins"
"github.com/swaggest/jsonschema-go"
)
var (
// Plugin IDs
plugins = []string{
"PLUG_CA_01",
}
// Gadget IDs
gadgets = []string{
"GA_01",
}
// Cache IDs
storages = []string{
"DB_01",
}
)
type feature struct {
ID string `json:"$id"`
Minimum int `json:"Minimum"`
Maximum int `json:"Maximum"`
Type string `json:"Type"`
Default string `json:"Default"`
}
func applyFeatureSchema(feature *feature, spec *jsonschema.Schema) error {
// quickest is to marshal then unmarshal
b, err := spec.MarshalJSON()
if err != nil {
return err
}
return json.Unmarshal(b, feature)
}
type _Extractor struct {
schema []byte
pluginCl PluginCl
gadgetCl GadgetCl
storageCl StorageCl
}
var _ FeatureExtractor = (*_Extractor)(nil)
func Init(schema []byte) *_Extractor {
ex := _Extractor{schema: schema}
// init plugins
for _, id := range plugins {
if ex.pluginCl.Connect(id) != nil {
return nil
}
}
// init gadgets
for _, id := range gadgets {
if ex.gadgetCl.Connect(id) != nil {
return nil
}
}
// init cache
for _, id := range storages {
if ex.storageCl.Connect(id) != nil {
return nil
}
}
return &ex
}
// Implements the FeatureExtractorMetadata interface
func (ex *_Extractor) Name() string { return "HIGH_RISK_CATEGORY_EXTRACTOR" }
func (ex *_Extractor) Description() string { return "Extracting features for the HIGH_RISK Category" }
func (ex *_Extractor) Version() string { return "0.1.0" }
func (ex *_Extractor) Author() string { return "0xbow.io" }
func (ex *_Extractor) License() string { return "MIT" }
func (ex *_Extractor) URL() string { return "github.com/0xbow.io/asp-v1.0/" }
func (ex *_Extractor) FeatureSchema() []byte { return ex.schema }
func (ex *_Extractor) PluginList() []string { return plugins }
// Parses the schema to build a feature set
func (ex *_Extractor) featureSet() (set []feature) {
var (
category = jsonschema.Schema{}
)
if category.UnmarshalJSON(ex.schema) == nil {
// extract features
featureSet := category.Properties["features"].TypeObject.Items.SchemaArray
set = make([]feature, len(featureSet))
// iterate over features
for i, f := range featureSet {
// apply feature schema
applyFeatureSchema(&set[i], f.TypeObject)
}
}
return
}
// comparator, mtRoot, and sign are stubs; their bodies are elided in this example.
func comparator(x [32]byte, y [32]byte) *Op { return nil }
func mtRoot(map[string][32]byte) [32]byte { return [32]byte{} }
func (ex *_Extractor) sign(v []byte) []byte { return nil }
// ExtractFeatures implements the FeatureExtractor interface (body elided).
func (ex *_Extractor) ExtractFeatures(record []byte) (out []byte) {
return
}
Feature extraction can be computationally intensive and difficult to scale.
Optimization strategies have been considered to ensure that feature extraction is efficient and scalable.
Parallel Processing: Parallel & distributed feature extraction for independent features.
If, when applying these patches to the default document, the document still satisfies the schema
and the features are consistent with the rules,
then the record is classified as AML_COMPLIANT.
Categories are mapped to a 256-bit vector where each bit represents a specific category
based on a predefined schema.
The bitmap can be efficiently stored as a 256-bit integer type or a byte array of size 32 (32 bytes).
Pointers facilitate bitmap-to-bitmap referencing and enable the creation of complex category structures.
Bitspaces can be reserved for pointers to other bitmaps as long as there is a clear schema for the pointer
and a mapping function between the pointer and the referenced bitmap.
A partition is a logical grouping of categories within a specific range of bits.
Partitions are used to group categories based on their domain or ownership,
i.e. bitspace 0-63 can be reserved for AML categories, while bitspace 64-127 can be reserved for KYC categories.
During the integration phase between an entity and an ASP, the entity (i.e. a protocol)
can reserve a specific range of bits for its categories, i.e. bitspace 64-127 for Protocol X categories (see the sketch below).
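The following Go sketch illustrates one possible bitmap layout with set/test helpers and reserved partitions; the bit-ordering (bit 0 = most-significant bit of byte 0) is an assumption, not part of the spec.

package bitmap

// Bitmap is a 256-bit category bitmap stored as a 32-byte array.
type Bitmap [32]byte

// Set marks the bit for the given category index (0-255).
func (b *Bitmap) Set(category uint) {
b[category/8] |= 1 << (7 - category%8)
}

// Has reports whether the bit for the given category index is set.
func (b *Bitmap) Has(category uint) bool {
return b[category/8]&(1<<(7-category%8)) != 0
}

// Partition is a reserved bit range [Start, End] owned by one domain,
// e.g. bits 0-63 for AML categories, bits 64-127 for Protocol X.
type Partition struct {
Start, End uint
}

// Contains reports whether a category index falls inside the partition.
func (p Partition) Contains(category uint) bool {
return category >= p.Start && category <= p.End
}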
The relevant entities must update categories to ensure the system remains effective in supporting compliance.
The following steps outline some simple processes for updating categories:
Rule Definition: Clearly define new categories or modifications to existing categories.
Rule Validation: Fuzz-test the features of categories to ensure accuracy & edge-case handling.
Version Control: Maintain versioning for categories, i.e. a Git repository containing category schemas.
CI/CD: Implement Continuous Integration/Continuous Deployment (CI/CD) pipelines for category updates.
The specific implementation of the ZKP system is WIP.
1. **Zero-Knowledge Proof Generation**:
The classification process is to be implemented in a zero-knowledge DSL (i.e. Circom). This allows
the ASP to generate a computation proof which verifies that the classification was done correctly
without revealing the actual features or the feature extractor code.
Current thoughts on the approach:
Auto-generate Circom templates based on the category-feature schema.
At least 1 circuit per category.
Utilise folding schemes (i.e. Nova) with libraries such as Sonobe.
The bitmap is the shared state between all steps.
Each step is 1 classification that sets the appropriate bit in the bitmap.
1 public output at the final step, which is the final bitmap.
On-chain Attestations:
Other ASPs or external parties can attest to the validity or correctness of an ASP record categorization
on-chain through attestation channels such as EAS.
Proof Verification:
External parties can verify the ZKP without accessing private features or extractor code.
The Public Registry is a collection of smart contracts which serves as the on-chain storage solution for the ASP.
It provides the necessary interfaces for onchain protocols to integrate with the ASP.
The current [registry](https://github.com/0xbow-io/asp-contracts-V1.0) is composed of 2 core contracts:
The Record Category Registry provides some public functions for querying the registry:
/*//////////////////////////////////////////////////////////////////////////
| PUBLIC FUNCTIONS
//////////////////////////////////////////////////////////////////////////*/
/**
* @notice Returns the category bitmap for a record hash for a specific protocol scope
* @param scope The protocol scope identifier
* @param recordHash The hash of the record event
* @return categoryBitmap The category bitmap for the record hash
*/
function getCategoryBitmap(
uint256 scope,
bytes32 recordHash
) public view returns (bytes32 categoryBitmap) {
(bool exists, bytes32 bitmap) = scopeRecordCategories[scope].tryGet(
recordHash
);
if (!exists) {
revert RecordNotFound(scope, recordHash);
}
return bitmap;
}
/**
* @notice Returns the category bitmap for a record hash at a given index
* for a specific protocol scope
* @param scope The protocol scope identifier
* @param index The index of the record hash
* @return recordHash recordHash at the given index
* @return categoryBitmap The category bitmap for the record hash
*/
function getRecordHashAndCategoryAt(
uint256 scope,
uint256 index
) public view returns (bytes32 recordHash, bytes32 categoryBitmap) {
return scopeRecordCategories[scope].at(index);
}
/**
* @notice Return the category bitmap for a record hash
* for a specific protocol scope
* @dev does not revert if the record hash does not exist
* @param scope The protocol scope identifier
* @param recordHash The hash of the record event
* @return exists A boolean indicating if the record hash exists
* @return categoryBitmap The category bitmap for the record hash
*/
function tryGetCategoryBitmap(
uint256 scope,
bytes32 recordHash
) public view returns (bool exists, bytes32 categoryBitmap) {
return scopeRecordCategories[scope].tryGet(recordHash);
}
/**
* @notice Returns the record hashes and their categories for a specific protocol scope between
* a given range
* @param scope The protocol scope identifier
* @param from The start index of the range
* @param to The end index of the range
* @return recordHashes The record hashes for the given range
* @return categoryBitmaps The category bitmaps for the given range
*/
function getRecordHashesAndCategories(
uint256 scope,
uint256 from,
uint256 to
)
public
view
returns (
bytes32[] memory recordHashes,
bytes32[] memory categoryBitmaps
)
{
require(
from < to && to <= scopeRecordCategories[scope].length(),
"Invalid range"
);
uint256 length = to - from;
recordHashes = new bytes32[](length);
categoryBitmaps = new bytes32[](length);
for (uint256 i = 0; i < length; i++) {
(recordHashes[i], categoryBitmaps[i]) = scopeRecordCategories[scope]
.at(from + i);
}
}
/**
* @notice Returns the last record hash & its category and the merkle-root
* for a specific protocol scope
* @param scope The protocol scope identifier
* @return root The merkle root for the protocol scope
* @return recordHash The hash of the last known record event
* @return categoryBitmap The category bitmap for the last known record event
* @return index The index of the last known record event
*/
function getLatestForScope(
uint256 scope
)
public
view
returns (
uint256 root,
bytes32 recordHash,
bytes32 categoryBitmap,
uint256 index
)
{
root = scopeRecordMerkleTrees[scope]._root();
index = scopeRecordCategories[scope].length();
if (index > 0) {
(recordHash, categoryBitmap) = scopeRecordCategories[scope].at(
index - 1
);
}
}
function _applyPredicate(
PredicateType predicateType,
bytes32 characteristicFunction,
bytes32 elementProperties
) internal pure returns (bool satisfiesPredicate) {
if (predicateType == PredicateType.Intersection) {
satisfiesPredicate =
(elementProperties & characteristicFunction) ==
characteristicFunction;
} else if (predicateType == PredicateType.Union) {
satisfiesPredicate =
(elementProperties & characteristicFunction) != 0;
} else if (predicateType == PredicateType.Complement) {
satisfiesPredicate =
(elementProperties & characteristicFunction) == 0;
}
}
function applyPredicate(
uint256 domain,
bytes32[] calldata subset,
bytes32 characteristicFunction,
PredicateType predicateType
) public view returns (bytes32[] memory elements, uint256 setCardinality) {
bytes32[] memory satisfyingElements = new bytes32[](subset.length);
for (uint256 i = 0; i < subset.length; i++) {
bytes32 element = subset[i];
(bool isMember, bytes32 elementProperties) = tryGetCategoryBitmap(
domain,
element
);
if (!isMember) {
continue;
}
if (
_applyPredicate(
predicateType,
characteristicFunction,
elementProperties
)
) {
satisfyingElements[setCardinality] = element;
setCardinality++;
}
}
assembly {
mstore(satisfyingElements, setCardinality)
}
return (satisfyingElements, setCardinality);
}
Predicate types:
0: Intersection (all bits in categoryMask must be set)
1: Union (at least one bit in categoryMask must be set)
2: Complement (no bits in categoryMask should be set)
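For off-chain consumers, the same predicate logic can be mirrored directly over 32-byte bitmaps; this Go sketch is a non-normative mirror of the on-chain _applyPredicate function above.

package registry

// Bitmap is a 256-bit category bitmap (bytes32 on-chain).
type Bitmap [32]byte

type PredicateType uint8

const (
Intersection PredicateType = iota // all mask bits must be set
Union                             // at least one mask bit must be set
Complement                        // no mask bits may be set
)

// ApplyPredicate evaluates a predicate over a record's category bitmap
// against a characteristic-function mask.
func ApplyPredicate(pt PredicateType, mask, props Bitmap) bool {
var and Bitmap
anyBit := false
for i := range mask {
and[i] = props[i] & mask[i]
if and[i] != 0 {
anyBit = true
}
}
switch pt {
case Intersection:
return and == mask
case Union:
return anyBit
case Complement:
return !anyBit
}
return false
}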
The 0xBow ASP REST API v1.0 provides a set of API endpoints for
querying records, generating association-sets, computing proofs, and querying service status.
⚠️ These endpoints are not privacy-preserving as they are provided for convenience ⚠️
Prior revisions of the ASP used binary classification to categorize records.
The initial version maintained 2 large sets of records to reflect this classification:
Inclusion Set: Record hashes of records that passed compliance checks.
Exclusion Set: Record hashes of records that failed compliance checks.
These sets were represented as merkle-trees.
Any insertion or removal of record hashes would result in the onchain emission of the
new merkle root.
Later versions were optimised for onchain storage of sets
to support onchain queries:
Rather than 1 large set, the sets were split into smaller sub-sets.
A unique identifier, mtID, was associated with each sub-set;
it is a hash of the tuple (chainID, contract address, set type).
This API endpoint generates a new association-set based on the provided hashSet and hashFilter.
It was tailored for Privacy Pool to support its proof-of-innocence mechanism.
The latest revision of the ASP implements the categorization process per ASP specification v1.0,
where Records are classified with multiple categories mapped to 252 bits (category bitmap).
The Record-to-category-bitmap mapping is stored in the onchain Registry.
This endpoint provides a way to filter a set of record hashes.
0xBow began building a PoC Association-Set Provider (ASP) in late December 2023 with the goal of launching
the first ASP service for Privacy Pool, a novel ZK-based privacy protocol.
Throughout Q1 of 2024, 0xBow continued to hit key milestones in the development of the ASP, proving
its feasibility and utility for Privacy Pool. At ETHDenver,
0xBow presented a live demo on stage, showcasing the ASP’s capabilities and potential.
In Q2 2024, progress entered troubled waters as difficulties surrounding Privacy Pool
delayed the launch.
In late April, Chainway Labs handed over Privacy Pool to 0xBow
to ensure its completion, a transition that was not without its challenges.
In Q3 2024, 0xBow has been working tirelessly on re-implementing Privacy Pool, which involves
revisions of the zk-circuits, smart contracts, and the UI webapp.
As 0xBow enters Q4 2024, its priorities remain unchanged:
The ASP and Privacy Pool are close to completion and will be ready for launch soon.
0xBow has exercised engineering due diligence during the development lifecycle of the ASP
and Privacy Pool. Compliance is not just an attribute of 0xBow’s product, but also a core value
of the organization.
0xBow is committed to the long-term development of the ASP and Privacy Pool.
0xBow’s mission is to protect the future of on-chain privacy, and provide the infrastructure necessary
to guarantee privacy as a public good.
A more detailed roadmap for the next year and beyond will be published soon.
0xBow is actively seeking to grow its team and is looking for talented individuals to join.
If you are interested in contributing to the development of the ASP and Privacy Pool, please get in touch.
0xbow ASP v1.0 go-build-kit is a set of primitive Go modules for building custom ASP solutions.
With the go-build-kit, you can easily:
Interact with existing ASP services
Integrate ASP modules into your protocol / DApp
Build & deploy your own custom ASP services
go-build-kit is open-source and will be available soon.
0xBow ASP v1.0 implements an extensible `Integration Framework` which offers a broad range of
functionality that can be readily integrated into custom solutions for your protocol / DApp.
0xBow ASP offers REST, gRPC and WebSocket APIs
to support offchain integration and onchain contracts for onchain integration.
If your requirements are not met by these existing APIs, you can register for a custom integration
with the ASP by following the steps below.
All integration efforts will contribute to the maturity & adoption of the ASP ❤️
A Registration is the acknowledgement of an integration request and marks
the beginning of the integration process. It is a formal step that allows both
parties (i.e. 0xBow ASP and Protocol X) to track the progress of the
integration.
To register, you will first need to submit a new Integration Request issue in the
asp-spec-v1.0a repository.
Be sure to specify the following details in your request:
Integration Type: Protocol/dApp Integration
Integration Target: The name of your protocol / dApp (i.e. “Protocol X”)
Involvement: What’s your involvement with the protocol / dApp? (i.e. Developer / Engineer, Founder, etc.)
Contact Information: How can we reach you? (i.e. Email, Twitter, Telegram, etc.)
Integration Description: A brief description of your protocol / dApp and the integration requirements.
0xBow has taken a modular approach to the ASP implementation, allowing for external
integrations to be made with ease.
Your protocol / DApp can leverage independent ASP services & modules
to suit your specific requirements, i.e.:
Utilize the ASP Watcher service to observe and record protocol / DApp state-transitions:
Integrate the observer & state-transition recorder modules into your services.
Or subscribe to Watcher WebSocket endpoints to receive event streams.
Utilize the ASP Categorization Engine to categorize specific events:
Subscribe to Categorization Engine WebSocket endpoints
Request the categorization of a Record via gRPC or REST API.
Integrate the categorization pipeline into your services.
Leverage the Onchain ASP Public Registry or Offchain Record Archive to support
business rules or compliant privacy-preserving mechanics
(i.e. public inputs to onchain verifier contracts).
Use-case 1: Restricted ERC20 Airdrop
Protocol X is planning to airdrop ERC20 tokens to a restricted set of accounts.
The conditions for the airdrop:
Account must have directly interacted with the Protocol.
Account must have a minimum balance of 1 ETH.
Account must have a minimum of 100 transactions.
Account is not directly or indirectly associated with any illicit activities.
Integration Path:
ASP generates the schema for Airdrop Eligible category with features reflecting the
specified conditions.
ASP will record all protocol interactions, categorize them and publish the category
bitmaps to the Public Registry.
ASP will deploy registry-adapter contract which contains a mapping of Account addresses &
record hashes as well as the bitmap filter for Airdrop Eligible.
The Airdrop can now integrate with the registry-adapter to ensure that only eligible
accounts receive the airdrop.
Use-case 2: Compliant ERC-4337 Paymaster
Protocol Y wishes to implement a compliant ERC-4337 Paymaster.
The compliance rules:
Account must have completed KYC verification.
Account’s UserOps are not associated with any illicit activities.
Integration Path:
ASP generates the schema for COMPLIANT_ACCOUNT category with features reflecting the
specified conditions.
ASP will record all protocol interactions, categorize them and publish the category
bitmaps to the Public Registry.
ASP will deploy registry-adapter contract which contains a mapping of Account addresses &
record hashes as well as the bitmap filter for COMPLIANT_ACCOUNT.
The Paymaster can now integrate the registry-adapter into its validatePaymasterUserOp function
to ensure that only compliant accounts can interact with the Paymaster.
After submission, 0xBow will review the integration requirements, conduct workshop sessions
to plan the integration process and deliver a detailed integration plan with timelines.
Once complete, 0xBow will request a signoff on the integration plan.
Upon signoff, the integration request will be documented in the Protocol Registry page
with links to the integration project tracking page.