
Version 4.0 - The Quantum ML Revolution

Q-Store v4.0 represents a fundamental architectural transformation: it adopts TensorFlow Quantum's proven patterns while maintaining our unique advantages in real quantum hardware optimization and quantum database capabilities.

Core Philosophy: "Make quantum ML as easy as classical ML, but optimized for real quantum hardware."

This is not just an update - it’s a complete reimagining of how quantum machine learning should work in production environments.

GitHub Discussions: Share your thoughts on the v4.0 design in the project's Discussions.

| Aspect | v3.5 (Current) | v4.0 (New) |
| --- | --- | --- |
| API | Custom training loop | Keras/PyTorch standard API |
| Integration | Standalone framework | TensorFlow + PyTorch plugins |
| Circuits | Cirq or Qiskit | Unified representation |
| Distributed | Manual orchestration | Standard strategies (TF/PyTorch) |
| Simulation | IonQ + local | qsim + Lightning + IonQ |
| Gradients | SPSA only | Multiple methods |
| Target Users | Quantum researchers | ML practitioners + Quantum researchers |

  1. 🔄 Dual-Framework Support: Both TensorFlow AND PyTorch (TFQ is TensorFlow-only)
  2. ⚛️ IonQ Hardware Optimization: Native gates, cost tracking, queue management
  3. 💾 Quantum Database: Integration with Pinecone for quantum state management
  4. 🎯 Smart Backend Routing: Auto-select optimal backend based on cost/performance
  5. 💰 Production Cost Optimization: Budget-aware training with automatic fallback

| Workload | v3.5 Actual | v4.0 Target | Method |
| --- | --- | --- | --- |
| Fashion MNIST (3 epochs) | 17.5 min | 5-7 min | qsim + optimization |
| Circuits/second | 0.57 | 3-5 | GPU acceleration |
| Multi-node scaling | N/A | 0.8-0.9 efficiency | Standard distributed training |
| IonQ hardware | N/A | 2x vs TFQ | Native gates + optimization |

TensorFlow Quantum          Q-Store v3.5
──────────────────          ────────────
✓ Keras API                 ✓ IonQ Native Gates
✓ MultiWorker Scale         ✓ Cost Optimization
✓ qsim Simulator            ✓ Multi-SDK Support
✓ Standard Patterns         ✓ Quantum Database
         ↓                           ↓
         └──────────────┬────────────┘
                        ↓
          ┌─────────────────────┐
          │    Q-Store v4.0     │
          │                     │
          │    Best of Both     │
          │        Worlds       │
          └─────────────────────┘

Use quantum layers just like classical layers with familiar APIs:

import tensorflow as tf
from q_store.tf import QuantumLayer

# Build a hybrid model with Keras
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu'),
    QuantumLayer(
        circuit_fn=my_quantum_circuit,
        n_qubits=4,
        backend='qsim'  # GPU-accelerated simulator
    ),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Standard Keras compilation and training
model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
model.fit(x_train, y_train, epochs=10, validation_data=(x_val, y_val))

import torch
import torch.nn as nn
from q_store.torch import QuantumLayer

class HybridQNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.classical = nn.Linear(28*28, 32)
        self.quantum = QuantumLayer(
            circuit_fn=my_quantum_circuit,
            n_qubits=4,
            backend='lightning.gpu'  # PennyLane GPU simulator
        )
        self.output = nn.Linear(4, 10)

    def forward(self, x):
        x = torch.relu(self.classical(x))
        x = self.quantum(x)
        return self.output(x)

# Standard PyTorch training loop
model = HybridQNN()
optimizer = torch.optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss()

Write circuits once, run anywhere:

from q_store import QuantumCircuit
# Unified circuit builder
circuit = QuantumCircuit(n_qubits=4)
circuit.h(0)
circuit.cx(0, 1)
circuit.rx(1, param='theta')
circuit.measure_all()
# Automatically converts to:
# - Cirq circuits for qsim backend
# - Qiskit circuits for IBM/Aer backend
# - IonQ native gates for IonQ hardware
# - PennyLane templates for Lightning backend
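
For reference, the same circuit written by hand against Cirq (one of the conversion targets above) looks roughly like the sketch below; the qubit layout and symbol handling are illustrative, not the exact converter output:

import cirq
import sympy

# Hand-written Cirq equivalent of the unified circuit above
qubits = cirq.LineQubit.range(4)
theta = sympy.Symbol('theta')
cirq_circuit = cirq.Circuit([
    cirq.H(qubits[0]),
    cirq.CNOT(qubits[0], qubits[1]),
    cirq.rx(theta).on(qubits[1]),
    cirq.measure(*qubits, key='m'),
])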

Multiple gradient methods for different use cases:

from q_store import GradientConfig
# Parameter-shift rule (exact gradients)
config = GradientConfig(method='parameter_shift')
# Finite differences (fast approximation)
config = GradientConfig(method='finite_diff', epsilon=0.01)
# SPSA (high-dimensional optimization)
config = GradientConfig(method='spsa', samples=100)
# Adjoint method (efficient for simulators)
config = GradientConfig(method='adjoint')
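
For intuition, the parameter-shift rule can be worked out by hand for a single RX rotation, where the expectation value is ⟨Z⟩ = cos(θ) and two shifted evaluations recover the exact derivative. This is a minimal numpy sketch of the rule itself, not Q-Store's internal implementation:

import numpy as np

# <Z> after RX(theta) applied to |0> is cos(theta)
def expectation(theta):
    return np.cos(theta)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    # Exact for gates generated by a Pauli operator:
    # df/dtheta = [f(theta + s) - f(theta - s)] / 2, with s = pi/2
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.7
print(parameter_shift_grad(expectation, theta))  # matches -sin(0.7)
print(-np.sin(theta))                            # analytical reference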

Automatic backend selection based on workload:

from q_store import BackendRouter

router = BackendRouter(
    preferences={
        'cost': 0.3,      # 30% weight on cost
        'speed': 0.5,     # 50% weight on speed
        'accuracy': 0.2   # 20% weight on accuracy
    },
    budget_limit=100.00,            # Max $100 for this job
    fallback_strategy='simulation'  # Use simulator if budget exceeded
)
# Router automatically selects:
# - qsim for small circuits (fast, free)
# - Lightning GPU for medium circuits (very fast, free)
# - IonQ simulator for testing (moderate cost)
# - IonQ hardware for final runs (high cost, real-hardware results)
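
Conceptually, the router scores each candidate backend with those weights and picks the best score. The numbers below are placeholders and the snippet only illustrates the idea, not the router's actual implementation:

# Illustrative only: normalized cost/speed/accuracy estimates per backend (higher = better)
candidates = {
    'qsim':           (1.0, 0.7, 0.8),
    'lightning.gpu':  (1.0, 0.9, 0.8),
    'ionq.simulator': (0.6, 0.4, 0.9),
    'ionq.aria':      (0.1, 0.2, 1.0),
}
weights = {'cost': 0.3, 'speed': 0.5, 'accuracy': 0.2}

def score(metrics):
    cost, speed, accuracy = metrics
    return weights['cost'] * cost + weights['speed'] * speed + weights['accuracy'] * accuracy

best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # 'lightning.gpu' with these placeholder numbers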

Distributed training uses each framework's standard strategies rather than custom orchestration:

import tensorflow as tf
from q_store.tf import QuantumLayer

# Standard TensorFlow distributed strategy
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        QuantumLayer(circuit_fn=my_circuit, n_qubits=4),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(
        optimizer='adam',
        loss='sparse_categorical_crossentropy'
    )

# Automatically distributes across workers
model.fit(train_dataset, epochs=10)

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from q_store.torch import QuantumLayer

# Standard PyTorch distributed setup
dist.init_process_group(backend='nccl')
model = HybridQNN().to(device)
model = DDP(model, device_ids=[local_rank])

# Standard distributed training loop
for epoch in range(num_epochs):
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()

Persistent quantum state management with improved performance:

from q_store import QuantumStateDB

db = QuantumStateDB(
    provider='pinecone',
    index_name='quantum_states',
    # New in v4.0: Automatic compression
    compression='zstd',
    compression_level=3
)

# Store quantum states with metadata
db.store_state(
    state_id='training_checkpoint_epoch_5',
    state_vector=quantum_state,
    metadata={
        'epoch': 5,
        'loss': 0.342,
        'accuracy': 0.876,
        'circuit_depth': 12
    }
)

# Similarity search for quantum states
similar_states = db.find_similar(
    query_state=current_state,
    top_k=5,
    filter={'accuracy': {'$gte': 0.85}}
)
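
Similarity between stored pure states is naturally expressed as the fidelity |⟨ψ|φ⟩|². The numpy sketch below shows the metric itself; the vector index's internal scoring may differ:

import numpy as np

def fidelity(psi, phi):
    # |<psi|phi>|^2 for normalized pure-state vectors
    return np.abs(np.vdot(psi, phi)) ** 2

# Two 2-qubit states: |00> and (|00> + |01>) / sqrt(2)
psi = np.array([1, 0, 0, 0], dtype=complex)
phi = np.array([1, 1, 0, 0], dtype=complex) / np.sqrt(2)
print(fidelity(psi, phi))  # 0.5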

Optimized compilation for IonQ hardware:

from q_store.ionq import IonQOptimizer

optimizer = IonQOptimizer(
    target_hardware='ionq.aria',
    optimization_level=3,
    # New in v4.0: Cost-aware gate selection
    minimize_cost=True,
    max_cost_per_shot=0.01
)

# Automatically converts to native gates
optimized_circuit = optimizer.optimize(circuit)

# Detailed cost estimation before execution
cost_estimate = optimizer.estimate_cost(
    circuit=optimized_circuit,
    shots=1000
)
print(f"Estimated cost: ${cost_estimate.total_cost:.2f}")
print(f"Gate count: {cost_estimate.native_gate_count}")
print(f"Circuit depth: {cost_estimate.optimized_depth}")

Leverage modern GPU simulators for massive speedups:

from q_store import SimulatorConfig

# qsim (Google's fast simulator)
config = SimulatorConfig(
    backend='qsim',
    device='GPU',
    precision='single'  # Faster, 32-bit precision
)

# PennyLane Lightning (highly optimized)
config = SimulatorConfig(
    backend='lightning.gpu',
    device='cuda:0',
    batch_size=256  # Process 256 circuits in parallel
)

# Automatic selection based on circuit size
config = SimulatorConfig(
    backend='auto',
    prefer_gpu=True,
    # Use GPU for >12 qubits, CPU otherwise
    gpu_threshold=12
)
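
The 12-qubit default reflects how dense state-vector simulation scales: an n-qubit state holds 2^n complex amplitudes, so very small circuits finish faster on CPU once GPU launch overhead is counted, while larger ones benefit from GPU parallelism. A rough sizing sketch, assuming single-precision complex amplitudes (8 bytes each):

# Approximate state-vector memory for a dense n-qubit simulation
def statevector_bytes(n_qubits, bytes_per_amplitude=8):  # complex64
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (12, 20, 28):
    print(n, statevector_bytes(n) / 1e6, "MB")
# 12 qubits ~ 0.03 MB, 20 qubits ~ 8.4 MB, 28 qubits ~ 2147 MB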

Full integration with TensorBoard for experiment tracking:

from q_store.tf import QuantumLayer
import tensorflow as tf

# TensorBoard automatically tracks quantum metrics
tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir='./logs',
    histogram_freq=1
)

model.fit(
    x_train, y_train,
    epochs=10,
    callbacks=[tensorboard_callback]
)

# View quantum-specific metrics in TensorBoard:
# - Circuit execution time per batch
# - Gradient estimation variance
# - Backend utilization
# - Cost per epoch (for hardware backends)

Budget-aware training with automatic fallback:

from q_store import CostOptimizer, QuantumTrainer

optimizer = CostOptimizer(
    total_budget=500.00,  # $500 total budget
    daily_limit=50.00,    # Max $50 per day
    strategy='adaptive',  # Adjust based on progress
    # New in v4.0: Intelligent fallback chain
    fallback_chain=[
        {'backend': 'ionq.aria', 'max_cost_per_batch': 1.00},
        {'backend': 'ionq.simulator', 'max_cost_per_batch': 0.10},
        {'backend': 'lightning.gpu', 'max_cost_per_batch': 0.00},
        {'backend': 'qsim', 'max_cost_per_batch': 0.00}
    ]
)

# Training automatically switches backends when the budget is exceeded
trainer = QuantumTrainer(
    model=model,
    cost_optimizer=optimizer,
    auto_checkpoint=True  # Checkpoint before switching backends
)
trainer.fit(x_train, y_train, epochs=20)

# Detailed cost report
print(optimizer.get_cost_report())
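
Conceptually, the fallback chain is walked in order and training continues on the first backend whose projected cost still fits the remaining budget. The sketch below only illustrates that idea, not the library's actual switching logic:

# Illustrative fallback selection: pick the first backend the remaining budget allows
fallback_chain = [
    {'backend': 'ionq.aria', 'max_cost_per_batch': 1.00},
    {'backend': 'ionq.simulator', 'max_cost_per_batch': 0.10},
    {'backend': 'lightning.gpu', 'max_cost_per_batch': 0.00},
    {'backend': 'qsim', 'max_cost_per_batch': 0.00},
]

def select_backend(remaining_budget, batches_left):
    for entry in fallback_chain:
        projected = entry['max_cost_per_batch'] * batches_left
        if projected <= remaining_budget:
            return entry['backend']
    return 'qsim'  # free simulator as the final fallback

print(select_backend(remaining_budget=5.00, batches_left=200))  # 'lightning.gpu'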

Migrating a training loop from the v3.5 custom trainer to the v4.0 standard Keras workflow:

v3.5:

from q_store import QuantumTrainer, TrainingConfig

config = TrainingConfig(
    learning_rate=0.01,
    epochs=10,
    backend='ionq'
)
trainer = QuantumTrainer(config)
trainer.train(model, x_train, y_train)

v4.0:

import tensorflow as tf
from q_store.tf import QuantumLayer

# Use standard Keras API
model = tf.keras.Sequential([
    QuantumLayer(circuit_fn=my_circuit, n_qubits=4)
])
model.compile(optimizer='adam', loss='mse')
model.fit(x_train, y_train, epochs=10)

v3.5:

from q_store import QuantumCircuit
circuit = QuantumCircuit(backend='cirq')
circuit.add_gate('H', 0)
circuit.add_gate('CNOT', [0, 1])

v4.0:

from q_store import QuantumCircuit
# Backend-agnostic circuit
circuit = QuantumCircuit(n_qubits=4)
circuit.h(0)
circuit.cx(0, 1)
# Backend selected at execution time

v3.5:

config = TrainingConfig(
    backend='ionq',
    ionq_api_key='your_key'
)

v4.0:

from q_store import configure

# Global configuration
configure(
    ionq_api_key='your_key',
    default_backend='auto',  # Smart selection
    cache_dir='~/.qstore/cache'
)
  • Update import statements (q_store.tf or q_store.torch)
  • Replace QuantumTrainer with Keras/PyTorch training loops
  • Update circuit definitions to unified API
  • Configure backends using new configure() function
  • Update distributed training to use TF/PyTorch strategies
  • Review gradient computation methods (new options available)
  • Test with qsim or lightning.gpu simulators (recommended)
  • Update monitoring to use TensorBoard instead of custom logging
  • Zero Learning Curve: If you know Keras or PyTorch, you already know Q-Store v4.0
  • Standard Tooling: Use familiar tools like TensorBoard, distributed strategies, and callbacks
  • Production Ready: Deploy quantum models using the same infrastructure as classical models
  • Hardware Optimization: Best-in-class IonQ native gate compilation
  • Multi-Backend: Seamlessly switch between simulators and real hardware
  • Cost Control: Run experiments within budget constraints
  • State Management: Persist and analyze quantum states at scale
  • Reduced Risk: Built on proven TensorFlow and PyTorch architectures

  • Cost Transparency: Track quantum computing costs alongside traditional infrastructure
  • Scalability: Leverage existing distributed training infrastructure

  • Feature Requests: Suggest features for the final release
  • Early Testing: Join the beta program (weeks 7-8)
  • Documentation: Help improve examples and guides

Ready for the future of quantum machine learning? Star us on GitHub and join the discussion!