# Quick Start
Get up and running with Q-Store v4.1.1 and its async quantum execution. No API keys are required for development!
## Installation
```bash
# Latest version with async execution and quantum-first architecture
pip install q-store==4.1.1

# With async support (recommended)
pip install q-store[async]==4.1.1

# Full installation (all backends)
pip install q-store[all]==4.1.1
```

Requirements: Python 3.11+
## Your First Quantum Circuit
```python
from q_store import QuantumCircuit

# Create a simple quantum circuit
circuit = QuantumCircuit(n_qubits=2)
circuit.h(0)        # Hadamard gate on qubit 0
circuit.cnot(0, 1)  # CNOT gate (control=0, target=1)

# Simulate the circuit
result = circuit.simulate()
print(result)
```

Run it:
```bash
python examples/basic_usage.py
```
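For this two-gate circuit the expected outcome is a Bell state: measurements of the two qubits are perfectly correlated. If you want to sanity-check that expectation independently of Q-Store, the same circuit is small enough to simulate by hand with NumPy (a quick sketch, not part of the library):

```python
import numpy as np

# Statevector simulation of the same circuit: H on qubit 0, then CNOT(0, 1)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                          # start in |00>
state = CNOT @ np.kron(H, I2) @ state   # H on qubit 0, then entangle

print(np.abs(state) ** 2)  # -> [0.5, 0, 0, 0.5]: only |00> and |11> survive
```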
## Hybrid Quantum-Classical ML (v4.1.1)
### Quantum-First Architecture (NEW!)

```python
import asyncio

from q_store.layers import (
    QuantumFeatureExtractor,
    QuantumPooling,
    QuantumReadout
)
from q_store.runtime import AsyncQuantumExecutor

# Build quantum-first model (70% quantum!)
# Sequential and Flatten are assumed here to come from Q-Store's model API
model = Sequential([
    Flatten(),
    QuantumFeatureExtractor(n_qubits=8, depth=4, backend='ionq'),
    QuantumPooling(n_qubits=4),
    QuantumFeatureExtractor(n_qubits=4, depth=3),
    QuantumReadout(n_qubits=4, n_classes=10)
])

# Async training loop (non-blocking!)
async def train_model():
    for epoch in range(10):
        for batch_x, batch_y in train_loader:
            # Async forward pass (never blocks!)
            predictions = await model.forward_async(batch_x)

            loss = criterion(predictions, batch_y)
            gradients = await model.backward_async(loss)
            optimizer.step(gradients)

        print(f"Epoch {epoch}, Loss: {loss.item():.4f}")

# Run async training
asyncio.run(train_model())
```
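`AsyncQuantumExecutor` is imported above but never called directly; the model's `forward_async`/`backward_async` presumably dispatch circuits through it. As a rough illustration of the concurrency pattern involved, here is a plain-asyncio sketch. The executor's `run` method and its signature are assumptions for illustration, not confirmed Q-Store API:

```python
import asyncio

async def run_batch(executor, circuits):
    # Submit every circuit at once and await the results together.
    # With a max_concurrent-style limit, the executor (not this loop)
    # decides how many circuits are actually in flight at a time.
    # NOTE: executor.run(...) is a hypothetical method name.
    return await asyncio.gather(*(executor.run(c) for c in circuits))
```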
### PyTorch Integration (Fixed in v4.1.1!)

```python
import torch
import torch.nn as nn

from q_store.torch import QuantumLayer

# Build hybrid model
model = nn.Sequential(
    nn.Linear(784, 16),
    QuantumLayer(n_qubits=8, depth=4, backend='ionq'),  # Now with async!
    nn.Linear(24, 10)  # 8 qubits × 3 bases = 24 features
)

# Train like any PyTorch model
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()  # clear gradients from the previous step
    output = model(input_data)
    loss = criterion(output, labels)
    loss.backward()  # Quantum gradients via SPSA
    optimizer.step()
```

Run it:
```bash
# Mock mode (instant, free, for development)
python examples/pytorch/fashion_mnist.py --samples 500 --epochs 2

# IonQ Simulator (real API, free, ~38 min for 1K images)
python examples/pytorch/cats_vs_dogs.py --no-mock --samples 1000 --epochs 5

# Note: Classical GPU training is 183-457× faster for production
# Use quantum for: research, small datasets, algorithm development
```
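The `loss.backward()` comment above mentions SPSA (Simultaneous Perturbation Stochastic Approximation). Its appeal for quantum layers is that it estimates gradients for *all* parameters from just two loss evaluations, and each evaluation means running circuits on slow, noisy hardware. A minimal, generic sketch of the estimator (illustrative only, not Q-Store internals):

```python
import numpy as np

def spsa_gradient(loss_fn, params, c=0.1, rng=None):
    """Two-evaluation SPSA estimate of d(loss)/d(params)."""
    rng = rng or np.random.default_rng()
    # Perturb every parameter simultaneously along a random +/-1 direction
    delta = rng.choice([-1.0, 1.0], size=params.shape)
    loss_plus = loss_fn(params + c * delta)
    loss_minus = loss_fn(params - c * delta)
    # The same two loss values yield an estimate for every component
    return (loss_plus - loss_minus) / (2 * c * delta)
```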
## Quantum-Enhanced Database
### Basic Setup

```python
from q_store import QuantumDatabase, DatabaseConfig
from q_store.runtime import AsyncQuantumExecutor

# Mock mode - no API keys needed
config = DatabaseConfig(
    quantum_sdk='mock',       # Use mock quantum backend
    enable_quantum=True,
    enable_superposition=True,
    max_concurrent=100,       # NEW in v4.1: async execution
    batch_size=20             # NEW in v4.1: circuit batching
)

db = QuantumDatabase(config)
```
### 1. Insert with Superposition

Store vectors in multiple contexts simultaneously:
```python
import numpy as np

# Store document in superposition
db.insert(
    id='doc_123',
    vector=np.random.rand(768),  # Your embedding
    contexts=[
        ('technical', 0.7),  # 70% weight
        ('business', 0.2),   # 20% weight
        ('legal', 0.1)       # 10% weight
    ]
)
```
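The context weights above sum to 1.0. If your weights come from raw relevance scores, a quick normalization keeps them interpretable as probabilities (a plain-Python helper, not part of the Q-Store API):

```python
def normalize_contexts(raw_contexts):
    """Scale (name, score) pairs so the weights sum to 1.0."""
    total = sum(score for _, score in raw_contexts)
    return [(name, score / total) for name, score in raw_contexts]

contexts = normalize_contexts([('technical', 7), ('business', 2), ('legal', 1)])
# -> [('technical', 0.7), ('business', 0.2), ('legal', 0.1)]
```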
### 2. Context-Aware Queries

Querying collapses the superposition to a specific context:
```python
# Query for technical context
results = db.query(
    vector=query_embedding,
    context='technical',  # Collapses to technical context
    top_k=10
)
```
### 3. Entangle Related Entities

Entangle entities so that updating one automatically adjusts the others:
```python
# Entangle related documents
db.create_entangled_group(
    group_id='tech_docs',
    entity_ids=['doc_1', 'doc_2', 'doc_3'],
    correlation_strength=0.85
)

# Update one - the others automatically adjust
db.update('doc_1', new_embedding)
# doc_2 and doc_3 automatically reflect the correlation!
```
### 4. Quantum Tunneling Search

Find globally optimal matches:
```python
# Classical search (local optimum)
classical_results = db.query(
    vector=query,
    enable_tunneling=False,
    top_k=10
)

# Quantum tunneling (global search)
quantum_results = db.tunnel_search(
    query=query,
    barrier_threshold=0.7,
    tunneling_strength=0.6,
    top_k=10
)
# Finds patterns classical search misses!
```
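To see what tunneling actually buys you on your data, compare the two result sets directly. This assumes results expose an `.id` attribute, as in the print statements later in this guide:

```python
classical_ids = {r.id for r in classical_results}
quantum_ids = {r.id for r in quantum_results}

# Documents that only the tunneling search surfaced
print("Tunneling-only hits:", quantum_ids - classical_ids)

# Jaccard overlap between the two strategies (1.0 = identical results)
overlap = len(classical_ids & quantum_ids) / len(classical_ids | quantum_ids)
print(f"Overlap: {overlap:.2f}")
```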
### 5. Automatic Decoherence

Physics-based time-to-live: entries naturally decay once their coherence time elapses.
```python
# Recent data - long coherence
db.insert(
    id='recent_doc',
    vector=embedding,
    coherence_time=86400000  # 24 hours, in milliseconds
)

# Old data - natural decay
db.insert(
    id='old_doc',
    vector=old_embedding,
    coherence_time=3600000  # 1 hour, in milliseconds
)

# Cleanup happens automatically
db.apply_decoherence()
```
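If you prefer decoherence to run on a schedule rather than ad hoc, a periodic cleanup task is a simple pattern. A sketch, assuming `db.apply_decoherence()` is safe to call repeatedly; the interval is arbitrary:

```python
import asyncio

async def decoherence_loop(db, interval_s: float = 60.0):
    """Periodically sweep out entries whose coherence time has expired."""
    while True:
        db.apply_decoherence()
        await asyncio.sleep(interval_s)

# Run alongside other async work:
# asyncio.create_task(decoherence_loop(db))
```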
### 6. Zero-Blocking Storage (NEW in v4.1.1!)

Async checkpoints and metrics:
```python
from q_store.storage import AsyncMetricsLogger, CheckpointManager

# Note: the awaits below must run inside an async function

# Async metrics (never blocks training!)
metrics = AsyncMetricsLogger('experiments/run_001/metrics.parquet')
await metrics.log({
    'epoch': 1,
    'loss': 0.342,
    'circuit_time_ms': 107,
    'cost_usd': 0.0
})

# Async checkpoints (compressed Zarr)
checkpoints = CheckpointManager('experiments/run_001/checkpoints')
await checkpoints.save(
    epoch=10,
    model_state=model.state_dict(),
    optimizer_state=optimizer.state_dict()
)
```
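In a real run these calls live inside the async training loop shown earlier, so storage I/O overlaps with circuit execution instead of stalling it. A sketch of that shape, where `run_one_epoch` is a hypothetical stand-in for the batch loop above:

```python
import asyncio

from q_store.storage import AsyncMetricsLogger

async def train_and_log(n_epochs: int = 10):
    metrics = AsyncMetricsLogger('experiments/run_001/metrics.parquet')
    for epoch in range(n_epochs):
        loss = await run_one_epoch()  # hypothetical: one pass over the data
        # Awaiting yields to the event loop; the write itself happens
        # off the training hot path
        await metrics.log({'epoch': epoch, 'loss': float(loss)})

asyncio.run(train_and_log())
```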
## Production Setup (Optional)

For production with a real quantum backend and persistent storage:
### 1. Create .env File
```bash
IONQ_API_KEY=your_ionq_api_key
PINECONE_API_KEY=your_pinecone_api_key
PINECONE_ENVIRONMENT=us-east-1
```
### 2. Configure Database

```python
from q_store import QuantumDatabase, DatabaseConfig

config = DatabaseConfig(
    # Quantum backend
    quantum_sdk='ionq',
    ionq_api_key='your-ionq-key',
    quantum_target='simulator',  # or 'qpu' for real hardware

    # Classical storage
    pinecone_api_key='your-pinecone-key',
    pinecone_environment='us-east-1',
    pinecone_index='quantum-vectors',

    # Features
    enable_quantum=True,
    enable_superposition=True,
    enable_entanglement=True,
    enable_tunneling=True,

    # Performance
    n_qubits=8,
    circuit_depth=4
)

db = QuantumDatabase(config)
```
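Rather than hardcoding keys as above, you can load them from the `.env` file created in step 1. A sketch using `python-dotenv` (one common choice; any env loader works):

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

from q_store import QuantumDatabase, DatabaseConfig

load_dotenv()  # reads .env from the working directory

config = DatabaseConfig(
    quantum_sdk='ionq',
    ionq_api_key=os.environ['IONQ_API_KEY'],
    quantum_target='simulator',
    pinecone_api_key=os.environ['PINECONE_API_KEY'],
    pinecone_environment=os.environ['PINECONE_ENVIRONMENT'],
    pinecone_index='quantum-vectors'
)

db = QuantumDatabase(config)
```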
## Complete Example: Document Search

```python
import numpy as np

from q_store import QuantumDatabase, DatabaseConfig

# Setup database (mock mode)
config = DatabaseConfig(
    quantum_sdk='mock',
    enable_quantum=True,
    enable_superposition=True,
    enable_tunneling=True
)

db = QuantumDatabase(config)

# Store documents with multiple contexts
documents = [
    {
        'id': 'doc_1',
        'vector': np.random.rand(768),
        'contexts': [('tech', 0.8), ('business', 0.2)]
    },
    {
        'id': 'doc_2',
        'vector': np.random.rand(768),
        'contexts': [('business', 0.7), ('legal', 0.3)]
    },
    {
        'id': 'doc_3',
        'vector': np.random.rand(768),
        'contexts': [('tech', 0.6), ('legal', 0.4)]
    }
]

# Insert all documents
for doc in documents:
    db.insert(
        id=doc['id'],
        vector=doc['vector'],
        contexts=doc['contexts']
    )

# Entangle related documents
db.create_entangled_group(
    group_id='related_docs',
    entity_ids=['doc_1', 'doc_3'],  # Both have tech context
    correlation_strength=0.85
)

# Query with context
query_vector = np.random.rand(768)

# Technical context query
tech_results = db.query(
    vector=query_vector,
    context='tech',
    top_k=5
)
print(f"Technical results: {[r.id for r in tech_results]}")

# Business context query
business_results = db.query(
    vector=query_vector,
    context='business',
    top_k=5
)
print(f"Business results: {[r.id for r in business_results]}")

# Quantum tunneling for diverse results
diverse_results = db.tunnel_search(
    query=query_vector,
    barrier_threshold=0.7,
    tunneling_strength=0.6,
    top_k=5
)
print(f"Diverse results: {[r.id for r in diverse_results]}")
```
## Performance Characteristics (v4.1.1)

Real-world data (Cats vs Dogs, 1,000 images, 5 epochs):
| Backend | Training Time | Accuracy | Cost | Speedup | Use Case |
|---|---|---|---|---|---|
| NVIDIA H100 | 5s | 60-70% | $0.009 | 457× faster | Production |
| NVIDIA A100 | 7.5s | 60-70% | $0.010 | 305× faster | Production |
| IonQ Simulator (v4.1) | 38.1 min | 58.5% | $0 (free!) | Baseline | Research |
| IonQ Aria QPU | ~45-60 min | 60-75% | $1,152 | 0.15× | Research only |
| Mock (v4.1) | Instant | 10-20% | Free | N/A | Development |
Key Insights:
- ✅ v4.1.1 is 10-20× faster than v4.0 (async execution)
- ⚠️ Classical GPUs are 183-457× faster than quantum for large datasets
- ✅ Free IonQ simulator perfect for research and algorithm development
- ✅ Quantum excels: small datasets (<1K samples), non-convex optimization, research
- ⚠️ Use classical GPUs for: production, large datasets, time-critical applications (see the rule-of-thumb helper after this list)
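These trade-offs reduce to a simple dispatch rule (plain Python, thresholds taken from the table and insights above; not a Q-Store API):

```python
def pick_backend(n_samples: int, production: bool) -> str:
    """Rule of thumb from the benchmarks above."""
    # Classical GPUs win by 183-457x on large or production workloads;
    # the free IonQ simulator suits research-scale experiments.
    if production or n_samples >= 1_000:
        return 'classical-gpu'
    return 'ionq-simulator'  # free; use 'mock' while developing
```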
## Running Examples
### Available Examples (v4.1.1)
```bash
# Basic quantum circuit
python examples/basic_usage.py

# Async PyTorch model (Fashion MNIST)
python examples/pytorch/fashion_mnist_async.py --samples 500 --epochs 2

# Real-world benchmark (Cats vs Dogs)
python examples/pytorch/cats_vs_dogs.py --samples 1000 --epochs 5

# Database operations with async storage
python examples/database_async_demo.py

# Quantum-first architecture demo
python examples/quantum_first_demo.py

# Performance comparison (Quantum vs Classical)
python examples/performance_comparison.py
```
### With Real IonQ Simulator (Free!)

```bash
# Set up .env file with IONQ_API_KEY
python examples/pytorch/cats_vs_dogs.py --no-mock --samples 1000 --epochs 5

# Expected: ~38 minutes (vs 7.5s for GPU)
# Cost: $0 (simulator is free!)
# Perfect for research and algorithm development
```
## Tips for Success

- Start with Mock Mode: Develop and test without API keys (instant, free)
- Use IonQ Simulator for Research: Free, unlimited experimentation
- Understand Performance: GPUs are 183-457× faster for production training
- Quantum Use Cases: Research, small datasets (<1K), algorithm development, non-convex optimization
- Use Async Execution: Enable async for a 10-20× speedup over v4.0 (see the config sketch after this list)
- Optimal Qubit Counts: 8 qubits recommended for v4.1.1 (was 4-8 in v4.0)
- Monitor Costs: Real QPU costs $1,152-$4,480 per training run
- Network Latency: 55% of execution time with cloud IonQ
- Energy Efficiency: Quantum uses 5-8× less power than GPU (50-80W vs 400W)
- Check Examples: All examples updated for v4.1.1 async architecture
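Several of these tips map directly onto configuration. A development-friendly config reflecting them, using only parameters that appear in the snippets above (values per the mock-mode, async, and qubit-count tips):

```python
from q_store import DatabaseConfig

dev_config = DatabaseConfig(
    quantum_sdk='mock',    # develop in mock mode first: instant and free
    enable_quantum=True,
    n_qubits=8,            # recommended qubit count for v4.1.1
    circuit_depth=4,
    max_concurrent=100,    # async execution: the 10-20x v4.1 speedup
    batch_size=20          # circuit batching
)
```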
## Getting Help
- Issues: GitHub Issues
- Documentation: Q-Store Docs
- Examples: `examples/` directory in the repository