Data & Tools
Open resources for the research community. All datasets, code repositories, and methodologies are freely accessible under permissive licenses (MIT for software, CC-BY-4.0 for data and documentation).
Datasets
Research datasets from the Adversarial Systems Research program. All datasets are archived on Zenodo with persistent DOIs and comprehensive documentation.
Consensual Sovereignty Monte Carlo Simulation Results
Complete simulation outputs from 1,000 Monte Carlo runs × 50 periods testing four dynamic mechanisms (Adversarial Representation, Deliberative Polling, Quadratic Weighted Voice, Asymmetric Consent Thresholds). Includes convergence metrics, friction reduction trajectories, and consent alignment evolution.
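The run structure behind these outputs (many independent runs, each evolving over a fixed number of periods, with per-run convergence summarized afterward) can be sketched as follows. This is a minimal illustration of the 1,000-runs × 50-periods design, not the actual framework: the drift rate, noise scale, and `simulate_run`/`monte_carlo` names are assumptions for demonstration.

```python
import random
import statistics

def simulate_run(periods, rate=0.1, seed=None):
    """One trajectory: consent alignment drifts toward 1.0 at a
    mechanism-specific rate. Dynamics are illustrative placeholders."""
    rng = random.Random(seed)
    alignment = 0.5
    trajectory = []
    for _ in range(periods):
        alignment += rate * (1.0 - alignment) + rng.gauss(0, 0.02)
        alignment = min(max(alignment, 0.0), 1.0)  # keep in [0, 1]
        trajectory.append(alignment)
    return trajectory

def monte_carlo(n_runs=1000, periods=50):
    """Aggregate final-period alignment across independent seeded runs."""
    finals = [simulate_run(periods, seed=i)[-1] for i in range(n_runs)]
    return statistics.mean(finals), statistics.stdev(finals)

mean_final, sd_final = monte_carlo()
print(f"mean final alignment: {mean_final:.3f} (sd {sd_final:.3f})")
```

Seeding each run makes the whole batch reproducible, which is what allows the archived outputs to be regenerated exactly.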
Cryptocurrency Event Study Data (2019-2025)
High-frequency price and volatility data for 6 cryptocurrencies across 50 infrastructure disruption and regulatory events. Includes GDELT sentiment indices, TARCH-X model outputs, and network contagion metrics.
Gradient Descent Training Data Experiments
PyTorch experimental results demonstrating gradient cascades, weight instability, and catastrophic forgetting under adversarial training conditions. Synthetic datasets illustrating four categories of developmental "training data problems."
Code Repositories
Open-source implementations of research methodologies. All code is MIT-licensed and includes comprehensive documentation, environment specifications, and replication instructions.
Consensual Sovereignty Simulation Framework
Python implementation of Monte Carlo simulations for four dynamic consent mechanisms. Includes legitimacy calculators, friction quantification, and convergence analysis tools.
Stack: Python, NumPy, Pandas, Matplotlib, Jupyter
TARCH-X Volatility Modeling Pipeline
Custom maximum likelihood estimation implementation for TARCH-X models with decomposed GDELT sentiment indices. Includes event classification, network analysis, and visualization dashboards.
Stack: Python, NumPy, SciPy, statsmodels, NetworkX, Plotly
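The core of a TARCH-X estimator is a conditional-variance recursion with an asymmetry (threshold) term plus an exogenous regressor, wrapped in a likelihood to minimize. The sketch below shows that shape under stated assumptions: a TARCH-X(1,1) with one exogenous series and a Gaussian likelihood. Function names, initialization at the sample variance, and the single-regressor form are simplifications, not the pipeline's actual implementation.

```python
import numpy as np

def tarch_x_variance(returns, x, omega, alpha, gamma, beta, delta):
    """Conditional variance for a TARCH-X(1,1) specification:
    sigma2_t = omega + alpha*e_{t-1}^2 + gamma*e_{t-1}^2*1[e_{t-1}<0]
               + beta*sigma2_{t-1} + delta*x_{t-1}
    where x is an exogenous regressor (e.g. a sentiment index)."""
    T = len(returns)
    sigma2 = np.empty(T)
    sigma2[0] = np.var(returns)  # initialize at the sample variance
    for t in range(1, T):
        e = returns[t - 1]
        sigma2[t] = (omega + alpha * e**2
                     + gamma * e**2 * (e < 0)   # leverage/threshold term
                     + beta * sigma2[t - 1]
                     + delta * x[t - 1])
    return sigma2

def neg_log_likelihood(params, returns, x):
    """Gaussian negative log-likelihood, suitable for scipy.optimize."""
    sigma2 = tarch_x_variance(returns, x, *params)
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + returns**2 / sigma2)

rng = np.random.default_rng(0)
r = rng.normal(0.0, 1.0, 200)        # placeholder return series
sent = np.abs(rng.normal(0.0, 1.0, 200))  # placeholder sentiment index
s2 = tarch_x_variance(r, sent, 0.1, 0.05, 0.1, 0.8, 0.01)
nll = neg_log_likelihood([0.1, 0.05, 0.1, 0.8, 0.01], r, sent)
```

In practice the parameter vector would be constrained (e.g. positivity of `omega`, stationarity of `alpha + beta + gamma/2 < 1`) before handing the objective to a numerical optimizer.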
Gradient Descent Training Data Framework
PyTorch implementation of adversarial training experiments demonstrating gradient cascades, weight instability, and catastrophic forgetting. Includes synthetic dataset generators and visualization tools.
Stack: Python, PyTorch, NumPy, Matplotlib
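Catastrophic forgetting, one of the phenomena these experiments demonstrate, can be shown in a few lines: train on task A, then train the same parameters on a conflicting task B with no rehearsal, and task-A performance collapses. The sketch below uses plain NumPy gradient descent on linear tasks for brevity rather than the repository's PyTorch setup; the task definitions and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_task(w_true, n=100):
    """Synthetic linear regression task: y = X @ w_true + noise."""
    X = rng.normal(size=(n, 2))
    y = X @ w_true + rng.normal(scale=0.1, size=n)
    return X, y

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, lr=0.1, steps=200):
    """Plain full-batch gradient descent on mean squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

task_a = make_task(np.array([1.0, -1.0]))
task_b = make_task(np.array([-1.0, 1.0]))  # optimum opposes task A

w = np.zeros(2)
w = train(w, *task_a)
loss_a_before = mse(w, *task_a)
w = train(w, *task_b)            # sequential training, no rehearsal
loss_a_after = mse(w, *task_a)   # task-A loss has blown up
print(loss_a_before, loss_a_after)
```

The same sequential-training pattern in PyTorch, with deeper networks and adversarially chosen task orderings, produces the gradient cascades and weight instability the repository documents.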
Additional research tools and experimental code available on GitHub. Check the organization page for utility libraries, data processing pipelines, and proof-of-concept implementations.
Research Methodologies
Methodological approaches employed across the Adversarial Systems Research program, emphasizing transparency, reproducibility, and substrate-independent collaboration.
Multi-Agent Peer Review
All papers undergo multi-agent peer review prior to publication. This process deploys specialized large language model agents configured with domain expertise (financial economics, political economy, computational philosophy) to provide systematic critique across theoretical rigor, empirical evidence quality, methodological soundness, and argumentative coherence.
Agent reviewers operate under explicit epistemic constraints matching their knowledge cutoffs, evaluate papers against disciplinary standards for their respective fields, and generate comprehensive review reports with scored assessments and actionable feedback. This approach complements traditional human peer review by providing rapid iteration cycles, identifying technical gaps early in the research process, and maintaining consistent evaluation criteria across heterogeneous research domains.
Review transcripts are archived and available upon request. The methodology is applied systematically to working papers before Zenodo submission, ensuring baseline quality standards are met prior to public release.
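The "scored assessments" step reduces to aggregating per-dimension scores across agents. A minimal sketch, assuming a 1-5 scale and the four review dimensions named above; the agent names, score values, and `aggregate` function are hypothetical, not the actual review tooling.

```python
from statistics import mean

# The four review dimensions from the methodology; 1-5 scale is assumed.
DIMENSIONS = ["theoretical_rigor", "empirical_evidence",
              "methodological_soundness", "argumentative_coherence"]

# Hypothetical scores from two domain-expert agents.
reviews = {
    "finance_agent": {"theoretical_rigor": 4, "empirical_evidence": 3,
                      "methodological_soundness": 4, "argumentative_coherence": 4},
    "polecon_agent": {"theoretical_rigor": 3, "empirical_evidence": 4,
                      "methodological_soundness": 3, "argumentative_coherence": 4},
}

def aggregate(reviews):
    """Average each dimension across agents, then compute the overall mean."""
    per_dim = {d: mean(r[d] for r in reviews.values()) for d in DIMENSIONS}
    overall = mean(per_dim.values())
    return per_dim, overall

per_dim, overall = aggregate(reviews)
print(per_dim, round(overall, 2))
```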
AI-Augmented Research
Research production leverages a tiered AI collaboration architecture: Claude Code (Anthropic) for strategic orchestration, complex reasoning, and architectural decisions; Gemini Pro for boilerplate generation, documentation drafting, and large-context analysis; Perplexity for bleeding-edge literature discovery beyond training cutoffs. This multi-model approach optimizes for cost efficiency (delegating grunt work to cheaper tiers) while maintaining rigorous analytical standards for substantive contributions.
The collaboration philosophy treats AI systems as cognitive scaffolding and intellectual sounding boards, not autonomous generators. All theoretical frameworks, empirical interpretations, argumentative claims, and policy proposals originate from human reasoning. AI tools accelerate mechanical tasks (reference formatting, literature search, structural organization) and provide iterative feedback on clarity and coherence, but substantive intellectual contributions remain the researcher's domain.
This workflow reflects substrate-independent collaboration principles: the cognitive architecture performing research tasks (biological neurons vs. artificial neural networks) is less relevant than the quality of reasoning, rigor of methodology, and validity of conclusions. Disclosure statements in all publications document AI tool usage with full transparency.
Tools employed: Claude Code (Anthropic Claude Sonnet 4.5), Gemini Pro (Google), Perplexity AI, LM Studio (local inference), GitHub Copilot (code completion).
Computational Reproducibility
All computational research outputs include open-source code repositories, interactive visualization dashboards, and complete replication materials. Papers with quantitative components (cryptocurrency event studies, TARCH-X volatility models, Monte Carlo simulations) provide executable code, documented data processing pipelines, and environment specifications enabling full replication.
Interactive dashboards extend traditional static PDF figures by enabling exploratory data analysis, parameter sensitivity testing, and visual investigation of empirical results. Current implementations include cryptocurrency volatility dashboards (TARCH-X model outputs, event impact visualization) with planned expansions to trauma gradient descent visualizations, personality embedding explorations, and consent-friction calculators.
All code is MIT-licensed (software) or CC-BY-4.0 (documentation), hosted on GitHub, and linked directly from paper landing pages. This commitment to computational transparency ensures findings are verifiable, extendable, and usable by other researchers.
Version Control & Open Science
Research outputs employ semantic versioning (MAJOR.MINOR.PATCH) tracked through Zenodo's version control system. Papers evolve through iterative releases: undergraduate theses become expanded preprints, working papers incorporate peer feedback, and published versions receive post-publication corrections or extensions. Each version receives a distinct DOI while maintaining linkage to the canonical work, ensuring citability across the research lifecycle.
This approach treats academic papers as living documents that improve through community engagement, transparent iteration, and cumulative refinement. Changelogs document substantive additions, methodological improvements, and evidence updates between versions. The Farzulla Research Zenodo community aggregates all outputs, providing centralized access to the complete research program.
Three-Tier Publication Model
Each research output is disseminated through three complementary channels optimized for different use cases: Zenodo archival records (citable DOI, permanent preservation), GitHub repositories (code transparency, extensibility), and interactive dashboards (exploratory visualization).
This architecture balances formal academic requirements (citable DOI, archival permanence) with modern scientific communication (interactive exploration, code transparency, extensibility). Interactive dashboards improve discoverability through SEO, demonstrate technical competence, and provide superior educational value compared to static figures.
Computational Infrastructure
Resurrexi Labs
Large-scale computational experiments, Monte Carlo simulations, and autonomous agent research conducted at Resurrexi Labs — our distributed computing research division operating a 7-node Kubernetes cluster for offensive security testing, multi-agent AI systems, and high-performance computational modeling.
Cluster Specifications
Heterogeneous compute environment ranging from a Ryzen 9 9900X (12C/24T, DDR5) to legacy Intel Celeron nodes, with dual AMD GPUs (7900 XTX + 7800 XT) and ~8 TB of distributed storage, running K3s (lightweight Kubernetes) on Arch Linux.
Research Capabilities
- Monte Carlo Simulations: Parallel execution across distributed worker nodes for large-scale statistical experiments
- Autonomous Security Agents: Offensive/defensive AI testing in isolated vulnerable environments
- Local LLM Inference: On-premise model deployment for sensitive research (LM Studio, vLLM)
- Containerized Workflows: Reproducible research environments via Docker/Kubernetes
- Network Analysis: Graph modeling, contagion analysis, systemic risk propagation studies
Compute Partnerships
Resurrexi Labs infrastructure is available for collaborative research projects requiring distributed computing, autonomous agent testing, or privacy-preserving local inference. Open to academic partnerships, proof-of-concept implementations, and interdisciplinary computational experiments.
For technical specifications, collaboration inquiries, or infrastructure documentation, visit resurrexi.dev or contact labs@farzulla.org.