Version 4.0 - The Quantum ML Revolution
The Big Announcement
From Custom Framework to Industry Standard
Q-Store v4.0 represents a fundamental architectural transformation: it adopts TensorFlow Quantum's proven patterns while maintaining our unique advantages in real quantum hardware optimization and quantum database capabilities.
Core Philosophy: "Make quantum ML as easy as classical ML, but optimized for real quantum hardware."
This is not just an update - it's a complete reimagining of how quantum machine learning should work in production environments.
GitHub Discussions: Share your thoughts on the v4.0 design
What Changes in v4.0
| Aspect | v3.5 (Current) | v4.0 (New) |
|---|---|---|
| API | Custom training loop | Keras/PyTorch standard API |
| Integration | Standalone framework | TensorFlow + PyTorch plugins |
| Circuits | Cirq or Qiskit | Unified representation |
| Distributed | Manual orchestration | Standard strategies (TF/PyTorch) |
| Simulation | IonQ + local | qsim + Lightning + IonQ |
| Gradients | SPSA only | Multiple methods |
| Target Users | Quantum researchers | ML practitioners + Quantum researchers |
Key Innovations (Unique to Q-Store v4.0)
- Dual-Framework Support: Both TensorFlow AND PyTorch (TFQ is TensorFlow-only)
- IonQ Hardware Optimization: Native gates, cost tracking, queue management
- Quantum Database: Integration with Pinecone for quantum state management
- Smart Backend Routing: Auto-select optimal backend based on cost/performance
- Production Cost Optimization: Budget-aware training with automatic fallback
Performance Targets
| Workload | v3.5 Actual | v4.0 Target | Method |
|---|---|---|---|
| Fashion MNIST (3 epochs) | 17.5 min | 5-7 min | qsim + optimization |
| Circuits/second | 0.57 | 3-5 | GPU acceleration |
| Multi-node scaling | N/A | 0.8-0.9 efficiency | Standard distributed training |
| IonQ hardware | N/A | 2x vs TFQ | Native gates + optimization |
The Strategy: Best of Both Worlds
Q-Store v4.0 combines the strengths of both stacks:

- From TensorFlow Quantum: the Keras API, MultiWorker scaling, the qsim simulator, and standard patterns
- From Q-Store v3.5: IonQ native gates, cost optimization, multi-SDK support, and the quantum database
What's New in v4.0
1. Native TensorFlow & PyTorch Integration
Use quantum layers just like classical layers with familiar APIs:
TensorFlow/Keras Example
```python
import tensorflow as tf
from q_store.tf import QuantumLayer

# Build a hybrid model with Keras
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu'),
    QuantumLayer(
        circuit_fn=my_quantum_circuit,
        n_qubits=4,
        backend='qsim'  # GPU-accelerated simulator
    ),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Standard Keras compilation and training
model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

model.fit(x_train, y_train, epochs=10, validation_data=(x_val, y_val))
```
PyTorch Example
```python
import torch
import torch.nn as nn
from q_store.torch import QuantumLayer

class HybridQNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.classical = nn.Linear(28*28, 32)
        self.quantum = QuantumLayer(
            circuit_fn=my_quantum_circuit,
            n_qubits=4,
            backend='lightning.gpu'  # PennyLane GPU simulator
        )
        self.output = nn.Linear(4, 10)

    def forward(self, x):
        x = torch.relu(self.classical(x))
        x = self.quantum(x)
        return self.output(x)

# Standard PyTorch training loop
model = HybridQNN()
optimizer = torch.optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss()
```
2. Unified Circuit Representation
Write circuits once, run anywhere:
```python
from q_store import QuantumCircuit

# Unified circuit builder
circuit = QuantumCircuit(n_qubits=4)
circuit.h(0)
circuit.cx(0, 1)
circuit.rx(1, param='theta')
circuit.measure_all()

# Automatically converts to:
# - Cirq circuits for qsim backend
# - Qiskit circuits for IBM/Aer backend
# - IonQ native gates for IonQ hardware
# - PennyLane templates for Lightning backend
```
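For context, the `circuit_fn` passed to `QuantumLayer` in the earlier Keras and PyTorch examples was never defined; below is a minimal sketch of what such a function could look like with this unified builder. The exact callable signature `QuantumLayer` expects, and how the `theta_*` parameter names map to trainable weights, are assumptions rather than confirmed API.

```python
from q_store import QuantumCircuit

def my_quantum_circuit(n_qubits=4):
    """Hypothetical circuit_fn: a small parameterized ansatz (sketch only)."""
    circuit = QuantumCircuit(n_qubits=n_qubits)
    for q in range(n_qubits):
        circuit.h(q)                        # put every qubit in superposition
    for q in range(n_qubits - 1):
        circuit.cx(q, q + 1)                # entangle neighbouring qubits
    for q in range(n_qubits):
        circuit.rx(q, param=f'theta_{q}')   # one trainable rotation per qubit
    circuit.measure_all()
    return circuit
```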
3. Advanced Gradient Computation
Multiple gradient methods for different use cases:
```python
from q_store import GradientConfig

# Parameter-shift rule (exact gradients)
config = GradientConfig(method='parameter_shift')

# Finite differences (fast approximation)
config = GradientConfig(method='finite_diff', epsilon=0.01)

# SPSA (high-dimensional optimization)
config = GradientConfig(method='spsa', samples=100)

# Adjoint method (efficient for simulators)
config = GradientConfig(method='adjoint')
```
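To make the difference between the first two methods concrete: the parameter-shift rule evaluates the circuit at two parameter values shifted by pi/2 and takes half the difference, which is exact for gates generated by Pauli operators, while finite differences only approximate the derivative. A minimal, framework-independent sketch (the `expectation` callable is a stand-in for any circuit evaluation, not a Q-Store API):

```python
import numpy as np

def parameter_shift_grad(expectation, theta, shift=np.pi / 2):
    """Exact gradient for Pauli-generated gates: (f(t + pi/2) - f(t - pi/2)) / 2."""
    return 0.5 * (expectation(theta + shift) - expectation(theta - shift))

def finite_diff_grad(expectation, theta, epsilon=0.01):
    """Central finite difference, matching GradientConfig(method='finite_diff')."""
    return (expectation(theta + epsilon) - expectation(theta - epsilon)) / (2 * epsilon)

# Example: the expectation <Z> after RX(theta) applied to |0> is cos(theta)
f = np.cos
print(parameter_shift_grad(f, 0.3))  # -sin(0.3), exact up to float error
print(finite_diff_grad(f, 0.3))      # close, but only an approximation
```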
4. Smart Backend Routing
Automatic backend selection based on workload:
```python
from q_store import BackendRouter

router = BackendRouter(
    preferences={
        'cost': 0.3,      # 30% weight on cost
        'speed': 0.5,     # 50% weight on speed
        'accuracy': 0.2   # 20% weight on accuracy
    },
    budget_limit=100.00,            # Max $100 for this job
    fallback_strategy='simulation'  # Use simulator if budget exceeded
)

# Router automatically selects:
# - qsim for small circuits (fast, free)
# - Lightning GPU for medium circuits (very fast, free)
# - IonQ simulator for testing (moderate cost)
# - IonQ hardware for final runs (high cost, exact results)
```
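A plausible reading of the `preferences` dictionary is a weighted score per candidate backend, with the budget acting as a hard constraint. The scoring below is an illustrative sketch with made-up ratings, not the actual routing logic:

```python
def score_backend(candidate, preferences):
    """Hypothetical weighted score; higher is better.

    `candidate` holds normalized ratings in [0, 1], where 1 means
    cheapest / fastest / most accurate.
    """
    return (preferences['cost'] * candidate['cost']
            + preferences['speed'] * candidate['speed']
            + preferences['accuracy'] * candidate['accuracy'])

candidates = {
    'qsim':          {'cost': 1.0, 'speed': 0.7, 'accuracy': 0.8},
    'lightning.gpu': {'cost': 1.0, 'speed': 0.9, 'accuracy': 0.8},
    'ionq.aria':     {'cost': 0.1, 'speed': 0.3, 'accuracy': 1.0},
}
prefs = {'cost': 0.3, 'speed': 0.5, 'accuracy': 0.2}
best = max(candidates, key=lambda name: score_backend(candidates[name], prefs))
print(best)  # 'lightning.gpu' under these illustrative ratings
```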
5. Production-Ready Distributed Training
TensorFlow MultiWorkerMirroredStrategy
```python
import tensorflow as tf
from q_store.tf import QuantumLayer

# Standard TensorFlow distributed strategy
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        QuantumLayer(circuit_fn=my_circuit, n_qubits=4),
        tf.keras.layers.Dense(10, activation='softmax')
    ])

    model.compile(
        optimizer='adam',
        loss='sparse_categorical_crossentropy'
    )

# Automatically distributes across workers
model.fit(train_dataset, epochs=10)
```
PyTorch DistributedDataParallel
```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from q_store.torch import QuantumLayer

# Standard PyTorch distributed setup
dist.init_process_group(backend='nccl')
local_rank = int(os.environ['LOCAL_RANK'])  # set by the launcher (e.g. torchrun)
device = torch.device(f'cuda:{local_rank}')

model = HybridQNN().to(device)
model = DDP(model, device_ids=[local_rank])

# Standard distributed training
# (optimizer and criterion defined as in the earlier HybridQNN example)
for epoch in range(num_epochs):
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()
```
6. Enhanced Quantum Database Integration
Persistent quantum state management with improved performance:
```python
from q_store import QuantumStateDB

db = QuantumStateDB(
    provider='pinecone',
    index_name='quantum_states',
    # New in v4.0: Automatic compression
    compression='zstd',
    compression_level=3
)

# Store quantum states with metadata
db.store_state(
    state_id='training_checkpoint_epoch_5',
    state_vector=quantum_state,
    metadata={
        'epoch': 5,
        'loss': 0.342,
        'accuracy': 0.876,
        'circuit_depth': 12
    }
)

# Similarity search for quantum states
similar_states = db.find_similar(
    query_state=current_state,
    top_k=5,
    filter={'accuracy': {'$gte': 0.85}}
)
```
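The natural similarity measure between state vectors is the fidelity |⟨ψ|φ⟩|²; whether `find_similar` uses fidelity or plain cosine similarity on the stored vectors is an assumption here. A quick way to sanity-check search results locally:

```python
import numpy as np

def fidelity(psi, phi):
    """|<psi|phi>|^2 for normalized state vectors."""
    return abs(np.vdot(psi, phi)) ** 2

# Two single-qubit states: |0> and |+>
ket0 = np.array([1, 0], dtype=complex)
ket_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
print(fidelity(ket0, ket_plus))  # 0.5
```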
7. IonQ Native Gate Optimization
Optimized compilation for IonQ hardware:
```python
from q_store.ionq import IonQOptimizer

optimizer = IonQOptimizer(
    target_hardware='ionq.aria',
    optimization_level=3,
    # New in v4.0: Cost-aware gate selection
    minimize_cost=True,
    max_cost_per_shot=0.01
)

# Automatically converts to native gates
optimized_circuit = optimizer.optimize(circuit)

# Detailed cost estimation before execution
cost_estimate = optimizer.estimate_cost(
    circuit=optimized_circuit,
    shots=1000
)

print(f"Estimated cost: ${cost_estimate.total_cost:.2f}")
print(f"Gate count: {cost_estimate.native_gate_count}")
print(f"Circuit depth: {cost_estimate.optimized_depth}")
```
8. GPU-Accelerated Simulation
Leverage modern GPU simulators for massive speedups:
```python
from q_store import SimulatorConfig

# qsim (Google's fast simulator)
config = SimulatorConfig(
    backend='qsim',
    device='GPU',
    precision='single'  # Faster, 32-bit precision
)

# PennyLane Lightning (highly optimized)
config = SimulatorConfig(
    backend='lightning.gpu',
    device='cuda:0',
    batch_size=256  # Process 256 circuits in parallel
)

# Automatic selection based on circuit size
config = SimulatorConfig(
    backend='auto',
    prefer_gpu=True,   # Use GPU for >12 qubits, CPU otherwise
    gpu_threshold=12
)
```
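The `backend='auto'` comment suggests a simple threshold rule; below is a sketch of that decision under the assumption that the threshold is compared against the circuit's qubit count, with a hypothetical preference order between the two GPU simulators:

```python
def pick_simulator(n_qubits, prefer_gpu=True, gpu_threshold=12):
    """Hypothetical auto-selection: GPU only pays off for larger state vectors."""
    if prefer_gpu and n_qubits > gpu_threshold:
        return 'lightning.gpu'  # assumed GPU preference; qsim-GPU also plausible
    return 'qsim'               # CPU simulation is usually faster for small circuits

print(pick_simulator(8))   # 'qsim'
print(pick_simulator(20))  # 'lightning.gpu'
```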
9. Native TensorBoard Support
Full integration with TensorBoard for experiment tracking:
```python
from q_store.tf import QuantumLayer
import tensorflow as tf

# TensorBoard automatically tracks quantum metrics
tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir='./logs',
    histogram_freq=1
)

model.fit(
    x_train,
    y_train,
    epochs=10,
    callbacks=[tensorboard_callback]
)

# View quantum-specific metrics in TensorBoard:
# - Circuit execution time per batch
# - Gradient estimation variance
# - Backend utilization
# - Cost per epoch (for hardware backends)
```
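If you want to log additional metrics beyond what the integration emits, the standard `tf.summary` API works as usual. The callback below is a generic Keras pattern, and the per-epoch timing metric is purely illustrative, not a built-in Q-Store metric:

```python
import time
import tensorflow as tf

class EpochTimingCallback(tf.keras.callbacks.Callback):
    """Logs wall-clock time per epoch to TensorBoard as a custom scalar."""
    def __init__(self, log_dir='./logs/quantum'):
        super().__init__()
        self.writer = tf.summary.create_file_writer(log_dir)

    def on_epoch_begin(self, epoch, logs=None):
        self.start = time.time()

    def on_epoch_end(self, epoch, logs=None):
        with self.writer.as_default():
            tf.summary.scalar('epoch_time_seconds', time.time() - self.start, step=epoch)
```

Pass it alongside the TensorBoard callback, e.g. `callbacks=[tensorboard_callback, EpochTimingCallback()]`.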
10. Advanced Cost Optimization
Budget-aware training with automatic fallback:
```python
from q_store import CostOptimizer

optimizer = CostOptimizer(
    total_budget=500.00,  # $500 total budget
    daily_limit=50.00,    # Max $50 per day
    strategy='adaptive',  # Adjust based on progress

    # New in v4.0: Intelligent fallback chain
    fallback_chain=[
        {'backend': 'ionq.aria', 'max_cost_per_batch': 1.00},
        {'backend': 'ionq.simulator', 'max_cost_per_batch': 0.10},
        {'backend': 'lightning.gpu', 'max_cost_per_batch': 0.00},
        {'backend': 'qsim', 'max_cost_per_batch': 0.00}
    ]
)

# Training automatically switches backends when budget exceeded
trainer = QuantumTrainer(
    model=model,
    cost_optimizer=optimizer,
    auto_checkpoint=True  # Checkpoint before switching backends
)

trainer.fit(x_train, y_train, epochs=20)

# Detailed cost report
print(optimizer.get_cost_report())
```
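One way to picture the fallback chain: training walks the list in order and uses the first backend whose per-batch cost cap still fits the remaining budget. The function below is an illustrative sketch of that behaviour, not the CostOptimizer internals:

```python
def pick_backend(fallback_chain, remaining_budget_per_batch):
    """Return the first backend whose per-batch cost cap fits the budget (sketch)."""
    for entry in fallback_chain:
        if entry['max_cost_per_batch'] <= remaining_budget_per_batch:
            return entry['backend']
    raise RuntimeError('No backend fits the remaining budget')

chain = [
    {'backend': 'ionq.aria', 'max_cost_per_batch': 1.00},
    {'backend': 'ionq.simulator', 'max_cost_per_batch': 0.10},
    {'backend': 'lightning.gpu', 'max_cost_per_batch': 0.00},
]
print(pick_backend(chain, remaining_budget_per_batch=0.25))
# 'ionq.simulator': hardware is too expensive, the cloud simulator still fits
```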
Migration from v3.5
Breaking Changes
1. Training Loop API
v3.5:
```python
from q_store import QuantumTrainer, TrainingConfig

config = TrainingConfig(
    learning_rate=0.01,
    epochs=10,
    backend='ionq'
)

trainer = QuantumTrainer(config)
trainer.train(model, x_train, y_train)
```
v4.0:
```python
import tensorflow as tf
from q_store.tf import QuantumLayer

# Use standard Keras API
model = tf.keras.Sequential([
    QuantumLayer(circuit_fn=my_circuit, n_qubits=4)
])

model.compile(optimizer='adam', loss='mse')
model.fit(x_train, y_train, epochs=10)
```
2. Circuit Definition
v3.5:
```python
from q_store import QuantumCircuit

circuit = QuantumCircuit(backend='cirq')
circuit.add_gate('H', 0)
circuit.add_gate('CNOT', [0, 1])
```
v4.0:
```python
from q_store import QuantumCircuit

# Backend-agnostic circuit
circuit = QuantumCircuit(n_qubits=4)
circuit.h(0)
circuit.cx(0, 1)

# Backend selected at execution time
```
3. Backend Configuration
v3.5:
```python
config = TrainingConfig(
    backend='ionq',
    ionq_api_key='your_key'
)
```
v4.0:
```python
from q_store import configure

# Global configuration
configure(
    ionq_api_key='your_key',
    default_backend='auto',     # Smart selection
    cache_dir='~/.qstore/cache'
)
```
Migration Checklist
- Update import statements (`q_store.tf` or `q_store.torch`)
- Replace `QuantumTrainer` with Keras/PyTorch training loops
- Update circuit definitions to the unified API
- Configure backends using the new `configure()` function
- Update distributed training to use TF/PyTorch strategies
- Review gradient computation methods (new options available)
- Test with `qsim` or `lightning.gpu` simulators (recommended)
- Update monitoring to use TensorBoard instead of custom logging
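Pulling the checklist together, a minimal migrated training script could look like the sketch below. It only uses APIs introduced above; the circuit body is illustrative and data loading is elided:

```python
import tensorflow as tf
from q_store import QuantumCircuit, configure
from q_store.tf import QuantumLayer

# 1. Global backend configuration (replaces per-trainer backend settings)
configure(ionq_api_key='your_key', default_backend='auto')

# 2. Backend-agnostic circuit definition
def my_circuit(n_qubits=4):
    circuit = QuantumCircuit(n_qubits=n_qubits)
    for q in range(n_qubits):
        circuit.h(q)
    circuit.cx(0, 1)
    circuit.measure_all()
    return circuit

# 3. Standard Keras model and training loop instead of QuantumTrainer
model = tf.keras.Sequential([
    QuantumLayer(circuit_fn=my_circuit, n_qubits=4),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(x_train, y_train, epochs=10)  # x_train / y_train: your existing data
```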
Why This Matters
For ML Practitioners
- Zero Learning Curve: If you know Keras or PyTorch, you already know Q-Store v4.0
- Standard Tooling: Use familiar tools like TensorBoard, distributed strategies, and callbacks
- Production Ready: Deploy quantum models using the same infrastructure as classical models
For Quantum Researchers
- Hardware Optimization: Best-in-class IonQ native gate compilation
- Multi-Backend: Seamlessly switch between simulators and real hardware
- Cost Control: Run experiments within budget constraints
- State Management: Persist and analyze quantum states at scale
For Organizations
- Reduced Risk: Built on proven TensorFlow and PyTorch architectures
- Cost Transparency: Track quantum computing costs alongside traditional infrastructure
- Scalability: Leverage existing distributed training infrastructure
Get Involved
- Feature Requests: Suggest features for the final release
- Early Testing: Join the beta program (weeks 7-8)
- Documentation: Help improve examples and guides
Ready for the future of quantum machine learning? Star us on GitHub and join the discussion!