Old Versions (Retired)

This page contains historical documentation for retired Q-Store versions. These versions are no longer maintained or recommended for production use.

| Version | Release | Status | Key Features | Superseded By |
|---------|---------|--------|--------------|---------------|
| v3.2 | Q4 2024 | ❌ Retired | Complete ML training capabilities, quantum neural networks | v3.5 |
| v3.3 | Q4 2024 | ❌ Retired | 50-100x algorithmic optimization, SPSA gradient estimator | v3.5 |
| v3.4 | Q4 2024 | ❌ Retired | 8-10x speed improvement, true parallelization | v3.5 |
| v3.5 | Q1 2025 | ❌ Retired | 2-3x realistic improvements, multi-backend orchestration | v4.0 |

Migration Recommendation: All users should upgrade directly to v4.0 for the best experience.


These example projects work with q-store versions 3.5 and below.

Example Projects

Q-Store v3.2 introduced complete machine learning training capabilities with full hardware abstraction, enabling quantum neural networks that work seamlessly across simulators and quantum hardware.

  • Quantum Neural Network Layers: QuantumLayer, QuantumConvolutionalLayer, QuantumPoolingLayer
  • Gradient Computation: Parameter Shift Rule, Finite Difference, Natural Gradients
  • Data Encoding: Amplitude, Angle, Basis, and ZZ Feature Map encoding
  • Training Infrastructure: Complete training orchestration with Adam optimizer, checkpoint management

Train once, run anywhere across different quantum backends (mock, Cirq, Qiskit, IonQ).

Pre-train and fine-tune models with parameter freezing support (see the sketch after the training example below).

  • Qubits: 2-8
  • Parameters: 6-48 trainable parameters
  • Training Time: ~45 seconds for 5 epochs (mock backend)
from q_store.core import QuantumTrainer, QuantumModel, TrainingConfig, BackendManager

config = TrainingConfig(
    pinecone_api_key="your-api-key",
    quantum_sdk="mock",
    learning_rate=0.01,
    epochs=10,
    batch_size=5,
    n_qubits=4,
)

# Resolve a concrete backend through the hardware-abstraction layer
backend_manager = BackendManager(config)
backend = backend_manager.get_backend("mock_ideal")

model = QuantumModel(
    input_dim=4,
    output_dim=2,
    n_layers=2,
    backend=backend,
)

# train() is a coroutine, so this line must run inside an async context
trainer = QuantumTrainer(config, backend_manager)
history = await trainer.train(model, data_loader)
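
The pre-training and fine-tuning workflow with parameter freezing might look like the following sketch. The load_checkpoint and freeze_parameters calls are illustrative names, not necessarily the exact v3.2 API:

# Hypothetical fine-tuning sketch: reuse pre-trained weights, freeze the
# first layer, and retrain only the remaining parameters.
pretrained = QuantumModel(input_dim=4, output_dim=2, n_layers=2, backend=backend)
pretrained.load_checkpoint("pretrained.ckpt")   # illustrative checkpoint API
pretrained.freeze_parameters(layers=[0])        # only later layers stay trainable

trainer = QuantumTrainer(config, backend_manager)
history = await trainer.train(pretrained, fine_tune_loader)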

Q-Store v3.3 delivered 50-100x faster training through algorithmic optimization while maintaining full backward compatibility with v3.2.

| Metric | v3.2 | v3.3 | Improvement |
|--------|------|------|-------------|
| Circuits per batch | 960 | 10-20 | 48-96x |
| Time per batch | 240s | 5-10s | 24-48x |
| Time per epoch | 40 min | 50-100s | 24-48x |
| Memory usage | 500 MB | 200 MB | 2.5x better |

Simultaneous Perturbation Stochastic Approximation (SPSA) estimates all gradients with just two circuit evaluations, instead of the 2N evaluations that per-parameter methods need for N parameters.
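
To make the two-evaluation trick concrete, here is a minimal NumPy sketch of a generic SPSA estimator (not Q-Store's internal implementation; loss_fn stands in for a full circuit-evaluation round trip):

import numpy as np

def spsa_gradient(loss_fn, theta, c=0.1, rng=None):
    rng = rng or np.random.default_rng()
    # Perturb every parameter at once with a random +/-1 direction
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    loss_plus = loss_fn(theta + c * delta)   # circuit evaluation 1
    loss_minus = loss_fn(theta - c * delta)  # circuit evaluation 2
    # The same two evaluations yield an estimate for all N components
    return (loss_plus - loss_minus) / (2.0 * c * delta)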

Batches multiple circuit executions into single API calls, reducing overhead by 5-10x.

Multi-level caching for quantum circuits to avoid redundant computations (2-5x speedup).

Optimized quantum layer with reduced gate count (33% fewer parameters).

Automatically selects the best gradient method based on training stage.

# v3.2 code
config = TrainingConfig(
    learning_rate=0.01,
    batch_size=32,
    n_qubits=10,
    circuit_depth=4,
)

# v3.3 - just add one line for a 24x speedup!
config = TrainingConfig(
    learning_rate=0.01,
    batch_size=32,
    n_qubits=10,
    circuit_depth=4,
    gradient_method='spsa',  # Add this!
)

Q-Store v3.4 delivered 8-10x faster training through true parallelization and hardware-native optimizations, achieving sub-60 second training epochs on IonQ hardware.

| Metric | v3.3 | v3.4 | Improvement |
|--------|------|------|-------------|
| Batch time (20 circuits) | 35s | 3-5s | 7-12x faster |
| Circuits per second | 0.6 | 5-8 | 8-13x faster |
| Epoch time | 392s | 30-50s | 8-13x faster |
| Training (5 epochs) | 32 min | 2.5-4 min | 8-13x faster |

True Parallel Batch Submission - submits multiple circuits in a single API call (12x faster submission).
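
The submission pattern can be sketched generically: instead of one network round trip per circuit, the whole batch rides in one request. The submit_batch and wait calls below are illustrative names, not the actual IonQ client API:

async def run_batch(client, circuits, shots=1000):
    # One API call carrying every circuit in the batch,
    # rather than len(circuits) separate submissions
    job = await client.submit_batch(circuits, shots=shots)
    return await client.wait(job)  # poll once for all results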

Hardware-Native Gate Compilation - compiles to IonQ native gates (GPi, GPi2, MS) for 30% faster execution.

Template-Based Circuit Caching - caches circuit structure and dynamically binds parameters (10x faster preparation).
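
The template idea, sketched with a hypothetical bind() method (real SDKs expose equivalents for parameterized circuits): build the circuit structure once per shape, then substitute numeric values on each execution:

_template_cache = {}

def get_bound_circuit(n_qubits, depth, params, build_template):
    # Expensive structural construction happens once per (n_qubits, depth)
    key = (n_qubits, depth)
    if key not in _template_cache:
        _template_cache[key] = build_template(n_qubits, depth)
    # Cheap per-call step: bind fresh parameter values into the template
    return _template_cache[key].bind(params)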

Integrated Optimization Pipeline - orchestrates all v3.4 optimizations with comprehensive tracking.

Dynamic Batch Optimization - adjusts batch size based on real-time queue conditions.
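
A simple illustration of queue-aware batch sizing (the thresholds are made up for the example):

def choose_batch_size(queue_depth, min_size=4, max_size=32):
    # Long queue: submit small batches so results start returning sooner
    if queue_depth > 100:
        return min_size
    # Idle queue: amortize per-call overhead with large batches
    if queue_depth < 10:
        return max_size
    return (min_size + max_size) // 2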

config = TrainingConfig(
    # Enable all v3.4 features at once
    enable_all_v34_features=True,
    # ...or enable them selectively:
    use_batch_api=True,         # parallel batch submission
    use_native_gates=True,      # IonQ native-gate compilation
    enable_smart_caching=True,  # template-based circuit caching
    connection_pool_size=5,     # pooled API connections
)

Version 3.5 focused on honest performance gains, addressing real bottlenecks to deliver realistic 2-3x improvements through proven techniques.

  • ✅ Realistic 2-3x improvement through proven techniques
  • ✅ Multi-backend distribution for true parallel execution
  • ✅ Adaptive resource allocation based on training phase
  • ✅ Honest documentation with verified benchmarks

Distribute quantum circuit execution across multiple backends simultaneously for 2-3x throughput improvement.
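
Conceptually, the orchestrator shards each batch across the configured backends and executes the shards concurrently. A minimal asyncio sketch, assuming each backend exposes a run() coroutine:

import asyncio

async def run_distributed(backends, circuits):
    # Round-robin the circuits across the available backends
    shards = [circuits[i::len(backends)] for i in range(len(backends))]
    shard_results = await asyncio.gather(
        *(backend.run(shard) for backend, shard in zip(backends, shards))
    )
    # Re-interleave so results line up with the original circuit order
    results = [None] * len(circuits)
    for i, res in enumerate(shard_results):
        results[i::len(backends)] = res
    return results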

Dynamically adjust circuit complexity during training for 30-40% faster execution.
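
For instance, an 'exponential' depth schedule might start with shallow circuits and deepen them as training progresses (the schedule below is illustrative, not the shipped one):

def circuit_depth_for_epoch(epoch, max_depth=4, growth=1.5):
    # Shallow, fast circuits early; full depth once training stabilizes
    return min(max_depth, max(1, round(growth ** epoch)))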

Use minimum shots needed for gradient estimation, saving 20-30% execution time.
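
The shot budget can follow the same logic: noisy gradient estimates are acceptable while gradients are large, so shots ramp up only as training converges (thresholds are illustrative):

def shots_for_step(grad_norm, base_shots=256, max_shots=2048):
    if grad_norm > 1.0:       # early training: coarse estimates suffice
        return base_shots
    if grad_norm > 0.1:       # mid training: moderate precision
        return base_shots * 2
    return max_shots          # near convergence: spend shots on precision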

Replace SPSA with natural gradient for 2-3x fewer iterations to convergence.
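
Natural gradient preconditions the raw gradient with the Fisher information matrix F, stepping along F^-1 grad instead of grad. A damped NumPy sketch (F would come from the quantum Fisher information of the circuit):

import numpy as np

def natural_gradient_step(theta, grad, fisher, lr=0.05, damping=1e-3):
    # Damping keeps the linear solve stable when F is near-singular
    f_damped = fisher + damping * np.eye(len(theta))
    nat_grad = np.linalg.solve(f_damped, grad)
    return theta - lr * nat_grad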

| Metric | v3.4 Actual | v3.5 Achieved |
|--------|-------------|---------------|
| Circuits/sec | 0.57 | 1.3 |
| Batch time (20 circuits) | 35s | 18s |
| Epoch time | 350s | 175s |
| Training (3 epochs) | 17.5 min | 8.5 min |
| Overall speedup | 1x | 2.1x |

from q_store import TrainingConfig, QuantumTrainer

config = TrainingConfig(
    # Multi-backend orchestration
    enable_multi_backend=True,
    backend_configs=[
        {'provider': 'ionq', 'target': 'simulator', 'api_key': key1},
        {'provider': 'ionq', 'target': 'simulator', 'api_key': key2},
        {'provider': 'local', 'simulator': 'qiskit_aer', 'device': 'GPU'},
    ],
    # Adaptive optimizations
    adaptive_circuit_depth=True,
    circuit_depth_schedule='exponential',
    adaptive_shot_allocation=True,
    # Advanced gradient methods
    gradient_method='natural_gradient',
    # Enable all v3.5 features
    enable_all_v35_features=True,
)

# train() is a coroutine; run it from an async context
trainer = QuantumTrainer(config)
await trainer.train(model, train_loader, val_loader)

  • Accuracy: 70-75% on Fashion MNIST (vs 88-90% classical)
  • Inference speed: ~2s per sample (vs <1ms classical)
  • Best for: Parameter-limited scenarios, few-shot learning

All users of retired versions should upgrade to v4.0:

  1. Update package: pip install --upgrade "q-store>=4.0.0" (the quotes keep the shell from treating >= as a redirect)
  2. Review v4.0 documentation: Check the v4.0 release notes
  3. Update configuration: Migrate to v4.0 configuration format
  4. Test thoroughly: Validate performance and accuracy improvements

For migration assistance, please refer to the v4.0 migration guide or open a GitHub discussion.


Last Updated: December 2024
Current Recommended Version: v4.0