Releases: 77AXEL/PyCNN
PyCNN v2.5
Full Changelog: v2.4...v2.5
PyCNN-2.4
PyCNN v2.4
Full Changelog: v2.3...v2.4
PyCNN v2.3
📦 Release: PyCNN v2.3
Added PyTorch model export and dataset augmentation support
🚀 Key Features
- ✅ Fully functional CNN implementation from scratch
- 🧠 Manual convolution, max pooling, and ReLU activations
- 🔁 Forward and backward propagation with mini-batch gradient descent
- 🏷 Multi-class classification via softmax and cross-entropy loss (a short sketch follows this list)
- 💾 Model save/load using pickle
- 🖼 RGB image preprocessing with customizable filters
- 🔍 Predict function to classify new unseen images
- 📊 Real-time training visualization (accuracy & loss per epoch)
- ⚡ Optional CUDA acceleration for faster training and inference
- 🆕 Adam optimizer support for improved training performance
- 🛠 Dynamic user-defined layers for fully customizable architectures
- 🚀 **More CPU-based performance optimizations** for faster computation and memory efficiency
- 🔄 Automatic backend conversion when loading models trained on a different backend
- 🛢️ Hugging Face CNN datasets support
- 🎚️ Dataset augmentation support
- 🔁 PyTorch export support for saving trained PyCNN models in a PyTorch-compatible format
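For reference, the softmax and cross-entropy step listed above boils down to the following (a minimal NumPy sketch; PyCNN's internal function and variable names will differ):

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over a batch of raw class scores."""
    z = logits - logits.max(axis=1, keepdims=True)  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true classes."""
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 3.0]])  # batch of 2 samples, 3 classes
labels = np.array([0, 2])                               # true class indices
print(cross_entropy(softmax(logits), labels))
```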
🖥️ PyCNN Usage
Training model
from pycnn.pycnn import PyCNN
pycnn = PyCNN()
pycnn.cuda(True) # Enable CUDA
pycnn.init(
image_size=64, # If unspecified the default is 64
batch_size=32, # If unspecified the default is 32
layers=[256, 128, 64, 32, 16, 8, 4], # Define dense layers of any size; if unspecified the default is [128, 64]
learning_rate=0.0001, # If unspecified the default is 0.0001
epochs=1000, # If unspecified the default is 50
filters = [
[# Custom filter 1],
[# Custom filter 2],
[# Custom filter 3],
[# Custom filter ...],
] # If unspecified, the library will use the default filters.
)
pycnn.adam() # If called, the library will use the Adam optimizer.
pycnn.dataset.local(
path_to_your_dataset_folder,
max_image=1000 # If unspecified, the library will use all images from each class.
) # Use this method if you want to load your local dataset folder.
pycnn.dataset.hf(
huggingface_dataset_name,
max_image=1000, # If unspecified, the library will use all images from each class.
cached=True, # Use the cached copy of the dataset instead of re-downloading it each time it is loaded (the default behavior when cached=True).
split="train", # Specify which split of the dataset to use for training the model (the default is the train split).
aug = [
1, # Left-Right Flip
2, # Top-Bottom Flip
3, # 90 degree rotation
4 # -90 degree rotation
] # Leave this setting unspecified if you don't want dataset augmentation
) # Use this method if you want to load a HuggingFace dataset.
pycnn.train_model(
visualize=True, # Displays a real-time graph of accuracy and loss per epoch when enabled. Set to False or leave unspecified to disable this feature.
early_stop=2 # Stops training when overfitting begins and the number of epochs exceeds early_stop. Set to 0 or leave unspecified to disable this feature.
)
Saving/Loading model
pycnn.save_model(path) # For saving models (if path is unspecified the library saves to "./model.bin" by default)
pycnn.load_model(path) # For loading models
pycnn.torch(path) # For saving PyTorch-compatible models (to use them in PyTorch later)
Prediction
result = pycnn.predict(your_image_path) # Returns a tuple of (class name, confidence value)
print(result)
The library will automatically convert weights, biases, and datasets to the selected backend. Models trained on GPU can still be loaded on CPU and vice versa.
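For example, a model trained with CUDA enabled can later be loaded and used on a CPU-only machine (a minimal sketch using only the calls shown above; the file names are placeholders):

```python
from pycnn.pycnn import PyCNN

pycnn = PyCNN()
pycnn.cuda(False)                  # stay on the CPU backend
pycnn.load_model("model.bin")      # model previously trained and saved with CUDA enabled
print(pycnn.predict("image.png"))  # weights and biases are converted to the CPU backend on load
```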
Usage example
from pycnn.pycnn import PyCNN
from os import listdir
pycnn = PyCNN()
pycnn.cuda(True)
pycnn.init(
layers=[512, 256],
epochs=500,
)
pycnn.dataset.hf("cifar10", max_image=500, cached=True)
pycnn.adam()
pycnn.train_model(early_stop=15)
pycnn.save_model("pycnn_cifar10.bin")
testdir = "cifar10_test"
_max = 1000
for classname in listdir(testdir):
    x = 0
    correct = 0
    for filename in listdir(f"{testdir}/{classname}"):
        if x == _max:
            break
        if pycnn.predict(f"{testdir}/{classname}/{filename}")[0] == classname:
            correct += 1
        x += 1
    print(classname, correct)
Output:
- Total prediction accuracy: 48.1%, which is a strong result for a model trained on only 500 images per class.
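The loop above prints raw per-class counts; the quoted overall figure can be reproduced by accumulating totals across classes, e.g. (an illustrative extension that reuses pycnn, testdir, and _max from the snippet above):

```python
from os import listdir

total = correct_total = 0
for classname in listdir(testdir):
    for filename in listdir(f"{testdir}/{classname}")[:_max]:
        if pycnn.predict(f"{testdir}/{classname}/{filename}")[0] == classname:
            correct_total += 1
        total += 1
print(f"Total prediction accuracy: {100 * correct_total / total:.1f}%")
```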
Hardware used while training:
PyCNN with PyTorch
- Create a PyTorch CNN model with PyCNN:
from pycnn.pycnn import PyCNN
# Initialize PyCNN model
pycnn = PyCNN()
pycnn.init(
epochs=50,
layers=[64, 32],
learning_rate=0.0001
)
# Load dataset from Hugging Face
pycnn.dataset.hf("cifar10", max_image=50, aug=[])
# Configure Adam optimizer and train
pycnn.adam()
pycnn.train_model()
# Save the model in a PyTorch format
pycnn.torch("model.pth")
- Load and use the model.pth in PyTorch:
from pycnn.pycnn import PyCNNTorchModel
from PIL import Image
import numpy as np
import torch
checkpoint = torch.load('model.pth', map_location='cpu')
model = PyCNNTorchModel(
checkpoint['layers'],
checkpoint['num_classes'],
checkpoint['filters'],
checkpoint['image_size']
)
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()
def predict(image_path):
    img = Image.open(image_path).convert("RGB")
    img = img.resize((checkpoint['image_size'], checkpoint['image_size']), Image.Resampling.LANCZOS)
    img_array = np.array(img).astype(np.float32) / 255.0
    img_tensor = torch.from_numpy(img_array).permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        output = model(img_tensor)
    confidence, predicted_idx = torch.max(output, 1)
    predicted_class = checkpoint['classes'][predicted_idx.item()]
    print(f"Prediction: {predicted_class} (Confidence: {confidence.item()*100:.2f}%)")

predict("exemple.png")
🧾 Changelog
v2.3
- New: PyTorch model export support
- New: Dataset augmentation support
- HuggingFace CNN datasets support
- More CPU-based performance optimizations using Cython
- Adam optimizer support for improved training performance
- Dynamic user-defined layers for fully customizable architectures
- Performance optimizations for faster computation and memory efficiency
- CUDA backend support via CuPy
- Automatic conversion between CPU and GPU backends
- Models can be trained on one backend and loaded on another seamlessly
- Minor improvements in training stability and performance
v2.2
- New: HuggingFace CNN datasets support
- New: More CPU-based performance optimizations using Cython
- Adam optimizer support for improved training performance
- Dynamic user-defined layers for fully customizable architectures
- Performance optimizations for faster computation and memory efficiency
- CUDA backend support via CuPy
- Automatic conversion between CPU and GPU backends
- Models can be trained on one backend and loaded on another seamlessly
- Minor improvements in training stability and performance
v2.0
- New: Adam optimizer support for improved training performance
- New: Dynamic user-defined layers for fully customizable architectures
- New: Performance optimizations for faster computation and memory efficiency
- CUDA backend support via CuPy
- Automatic conversion between CPU and GPU backends
- Models can be trained on one backend and loaded on another seamlessly
- Minor improvements in training stability and performance
v0.2.0
- New: CUDA backend support via CuPy
- Automatic conversion between CPU and GPU backends
- Models can be trained on one backend and loaded on another seamlessly
- Minor improvements in training stability and performance
v0.1.1
- Real-time training visualization with Matplotlib
v0.1.0
- Initial version with full training and prediction pipeline
📌 Installation
pip install git+https://github.com/77AXEL/PyCNN.git@v2.3
Optional: Install CuPy for CUDA support:
pip install cupy-cuda118 # Match your CUDA version
- See the CUDA Documentation for more information on how to set it up
💬 Feedback & Contributions
We welcome issues, suggestions, and contributions!
Check the Discussions tab or see CONTRIBUTING.md
🛡 Security
Found a security issue? Please report privately to:
📧 a.x.e.l777444000@gmail.com
📜 License
Released under the MIT License
PyCNN v2.2
📦 Release: PyCNN v2.2 (fixed some issues)
Added HuggingFace CNN datasets support, further performance improvements using Numba JIT and Cython, and changes to the usage syntax.
🚀 Key Features
- ✅ Fully functional CNN implementation from scratch
- 🧠 Manual convolution, max pooling, and ReLU activations
- 🔁 Forward and backward propagation with mini-batch gradient descent
- 🏷 Multi-class classification via softmax and cross-entropy loss
- 💾 Model save/load using pickle
- 🖼 RGB image preprocessing with customizable filters
- 🔍 Predict function to classify new unseen images
- 📊 Real-time training visualization (accuracy & loss per epoch)
- ⚡ Optional CUDA acceleration for faster training and inference
- 🆕 Adam optimizer support for improved training performance
- 🛠 Dynamic user-defined layers for fully customizable architectures
- 🚀 **More CPU-based performance optimizations** for faster computation and memory efficiency
- 🔄 Automatic backend conversion when loading models trained on a different backend
- 🛢️ Hugging Face CNN datasets support
🖥️ PyCNN Usage
Training model
from pycnn.pycnn import PyCNN
pycnn = PyCNN()
pycnn.cuda(True) # Enable CUDA
pycnn.init(
image_size=64, # If unspecified the default is 64
batch_size=32, # If unspecified the default is 32
layers=[256, 128, 64, 32, 16, 8, 4], # Define dense layers of any size; if unspecified the default is [128, 64]
learning_rate=0.0001, # If unspecified the default is 0.0001
epochs=1000, # If unspecified the default is 50
filters = [
[# Custom filter 1],
[# Custom filter 2],
[# Custom filter 3],
[# Custom filter ...],
] # If unspecified, the framework will use the default filters.
)
pycnn.adam() # If called, the framework will use the Adam optimizer.
pycnn.dataset.local(
path_to_your_dataset_folder,
max_image=1000 # If unspecified, the framework will use all images from each class.
) # Use this method if you want to load your local dataset folder.
pycnn.dataset.hf(
huggingface_dataset_name,
max_image=1000, # If unspecified, the framework will use all images from each class.
cached=True, # Use the cached copy of the dataset instead of re-downloading it each time it is loaded (the default behavior when cached=True).
split="train" # Specify which split of the dataset to use for training the model (the default is the train split).
) # Use this method if you want to load a HuggingFace dataset.
pycnn.train_model(
visualize=True, # Displays a real-time graph of accuracy and loss per epoch when enabled. Set to False or leave unspecified to disable this feature.
early_stop=2 # Stops training when overfitting begins and the number of epochs exceeds early_stop. Set to 0 or leave unspecified to disable this feature.
)
Saving/Loading model
pycnn.save_model(path=your_save_path) # If your_save_path is unspecified the framework will save it to "./model.bin" by default
pycnn.load_model(path=your_model_path)
Prediction
result = pycnn.predict(your_image_path) # Returns a tuple of (class name, confidence value)
print(result)
The framework will automatically convert weights, biases, and datasets to the selected backend. Models trained on GPU can still be loaded on CPU and vice versa.
Usage example
from pycnn.pycnn import PyCNN
from os import listdir
pycnn = PyCNN()
pycnn.cuda(True)
pycnn.init(
layers=[512, 256],
epochs=500,
)
pycnn.dataset.hf("cifar10", max_image=500, cached=True)
pycnn.adam()
pycnn.train_model(early_stop=15)
pycnn.save_model("pycnn_cifar10.bin")
testdir = "cifar10_test"
_max = 1000
for classname in listdir(testdir):
    x = 0
    correct = 0
    for filename in listdir(f"{testdir}/{classname}"):
        if x == _max:
            break
        if pycnn.predict(f"{testdir}/{classname}/{filename}")[0] == classname:
            correct += 1
        x += 1
    print(classname, correct)
🧾 Changelog
v2.2
- New: HuggingFace CNN datasets support
- New: More CPU-based performance optimizations using Cython
- Adam optimizer support for improved training performance
- Dynamic user-defined layers for fully customizable architectures
- Performance optimizations for faster computation and memory efficiency
- CUDA backend support via CuPy
- Automatic conversion between CPU and GPU backends
- Models can be trained on one backend and loaded on another seamlessly
- Minor improvements in training stability and performance
v2.0
- New: Adam optimizer support for improved training performance
- New: Dynamic user-defined layers for fully customizable architectures
- New: Performance optimizations for faster computation and memory efficiency
- CUDA backend support via CuPy
- Automatic conversion between CPU and GPU backends
- Models can be trained on one backend and loaded on another seamlessly
- Minor improvements in training stability and performance
v0.2.0
- New: CUDA backend support via CuPy
- Automatic conversion between CPU and GPU backends
- Models can be trained on one backend and loaded on another seamlessly
- Minor improvements in training stability and performance
v0.1.1
- Real-time training visualization with Matplotlib
v0.1.0
- Initial version with full training and prediction pipeline
📌 Installation
pip install git+https://github.com/77AXEL/PyCNN.git@v2.2
Optional: Install CuPy for CUDA support:
pip install cupy-cuda118 # Match your CUDA version
- See the CUDA Documentation for more information on how to set it up
💬 Feedback & Contributions
We welcome issues, suggestions, and contributions!
Check the Discussions tab or see CONTRIBUTING.md
🛡 Security
Found a security issue? Please report privately to:
📧 a.x.e.l777444000@gmail.com
📜 License
Released under the MIT License
PyCNN v2.0
📦 Release: PyCNN v2.0
This update enhances PyCNN by adding CUDA (GPU) support for accelerated training and inference when a compatible GPU and CuPy are available. It introduces Adam optimizer support, dynamic layer definitions for user-customized architectures, and various performance optimizations, while allowing users to seamlessly toggle between CPU and GPU backends.
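For context, the Adam update enabled by model.adam() follows the standard formulation below (a minimal NumPy sketch of one parameter update; PyCNN's internal implementation and hyperparameter names may differ):

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single weight array (standard formulation)."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected moments (t = update step, starting at 1)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```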
🚀 Key Features
- ✅ Fully functional CNN implementation from scratch
- 🧠 Manual convolution, max pooling, and ReLU activations
- 🔁 Forward and backward propagation with mini-batch gradient descent
- 🏷 Multi-class classification via softmax and cross-entropy loss
- 💾 Model save/load using pickle
- 🖼 RGB image preprocessing with customizable filters
- 🔍 Predict function to classify new unseen images
- 📊 Real-time training visualization (accuracy & loss per epoch)
- ⚡ Optional CUDA acceleration for faster training and inference
- 🆕 Adam optimizer support for improved training performance
- 🛠 Dynamic user-defined layers for fully customizable architectures
- 🚀 Performance optimizations for faster computation and memory efficiency
- 🔄 Automatic backend conversion when loading models trained on a different backend
🖥️ PyCNN Usage
Training model
from pycnn.model import CNN
model = CNN()
model.cuda(True) # Enable CUDA
model.init(
image_size=64,
batch_size=32,
layers=[256, 128, 64, 32, 16, 8, 4], # Allows you to define any type of dense layer.
learning_rate=0.0001,
epochs=1000,
dataset_path="data",
max_image=1000, # If unspecified, the framework will use all images from each class.
filters = [
[# Custom filter 1],
[# Custom filter 2],
[# Custom filter 3],
[# Custom filter ...],
] # If unspecified, the framework will use the default filters
)
model.adam() # If called, the framework will use the Adam optimizer
model.load_dataset()
model.train_model(visualize=True, early_stop=2)
#visualize: Displays a real-time graph of accuracy and loss per epoch when enabled. Set to False or leave unspecified to disable this feature.
#early_stop: Stops training when overfitting begins and the number of epochs exceeds early_stop. Set to 0 or leave unspecified to disable this feature.
Saving/Loading model
model.save(path=your_save_path) # If your_save_path is unspecified the framework will save it to "./model.bin"
model.load(path=your_model_path) # For loading models
Prediction
result = model.predict(your_image_path)
print(result)
The model will automatically convert weights, biases, and datasets to the selected backend. Models trained on GPU can still be loaded on CPU and vice versa.
Usage example
from pycnn.model import CNN
from os import listdir
model = CNN()
model.init(
image_size=64,
batch_size=32,
layers=[256, 128, 64, 32, 16, 8, 4],
learning_rate=0.0001,
epochs=1000,
dataset_path="data",
max_image=100,
)
model.adam()
model.load_dataset()
model.train_model(early_stop=2)
x = 0
for path in listdir("data/cat"):
    if model.predict(f"data/cat/{path}") == "cat":
        x += 1
    if x == 10:
        break
print(f"cat: {x}/10")
x = 0
for path in listdir("data/dog"):
    if model.predict(f"data/dog/{path}") == "dog":
        x += 1
    if x == 10:
        break
print(f"dog: {x}/10")
🧾 Changelog
v2.0
- New: Adam optimizer support for improved training performance
- New: Dynamic user-defined layers for fully customizable architectures
- New: Performance optimizations for faster computation and memory efficiency
- CUDA backend support via CuPy
- Automatic conversion between CPU and GPU backends
- Models can be trained on one backend and loaded on another seamlessly
- Minor improvements in training stability and performance
v0.2.0
- New: CUDA backend support via CuPy
- Automatic conversion between CPU and GPU backends
- Models can be trained on one backend and loaded on another seamlessly
- Minor improvements in training stability and performance
v0.1.1
- Real-time training visualization with Matplotlib
v0.1.0
- Initial version with full training and prediction pipeline
📌 Installation
pip install git+https://github.com/77AXEL/PyCNN.git@v2.0
Optional: Install CuPy for CUDA support:
pip install cupy-cuda118 # Match your CUDA version
- See the CUDA Documentation for more information on how to set it up
💬 Feedback & Contributions
We welcome issues, suggestions, and contributions!
Check the Discussions tab or see CONTRIBUTING.md
🛡 Security
Found a security issue? Please report privately to:
📧 a.x.e.l777444000@gmail.com
📜 License
Released under the MIT License
Added CUDA support
📦 Release: v0.2.0 – Added support for CUDA
This update enhances PyCNN by adding CUDA (GPU) support, allowing accelerated training and inference when a compatible GPU and CuPy are available. Users can now toggle between CPU and GPU backends seamlessly.
🚀 Key Features
- ✅ Fully functional CNN implementation from scratch
- 🧠 Manual convolution, max pooling, and ReLU activations
- 🔁 Forward and backward propagation with mini-batch gradient descent
- 🏷 Multi-class classification via softmax and cross-entropy loss
- 💾 Model save/load using pickle
- 🖼 RGB image preprocessing with customizable filters
- 🔍 Predict function to classify new unseen images
- 📊 Real-time training visualization (accuracy & loss per epoch)
- ⚡ New: Optional CUDA acceleration for faster training and inference
- 🔄 Automatic backend conversion when loading models trained on a different backend
🖥️ CUDA Usage
Enable CUDA (GPU) support:
from pycnn.model import CNN
model = CNN()
model.cuda(True) # Enable CUDA
Switch back to CPU:
model.cuda(False) # Disable CUDA
The model will automatically convert weights, biases, and datasets to the selected backend. Models trained on GPU can still be loaded on CPU and vice versa.
🏁 Training & Prediction
Training and prediction remain the same as previous versions. Example:
model.init(
image_size=32,
batch_size=32,
h1=128,
h2=64,
learning_rate=0.01,
epochs=10,
dataset_path="data",
max_image=200
)
model.load_dataset()
model.train_model(visualize=True)
model.save_model()
model.load_model("model.bin")
result = model.predict("path/to/image.png")
print("Prediction:", result)🧾 Changelog
v0.2.0
- New: CUDA backend support via CuPy
- Automatic conversion between CPU and GPU backends
- Models can be trained on one backend and loaded on another seamlessly
- Minor improvements in training stability and performance
v0.1.1
- Real-time training visualization with Matplotlib
v0.1.0
- Initial version with full training and prediction pipeline
📌 Installation
pip install git+https://github.com/77AXEL/PyCNN.git@v0.2.0
Optional: Install CuPy for CUDA support:
pip install cupy-cuda118 # Match your CUDA version
💬 Feedback & Contributions
We welcome issues, suggestions, and contributions!
Check the Discussions tab or see CONTRIBUTING.md
🛡 Security
Found a security issue? Please report privately to:
📧 a.x.e.l777444000@gmail.com
📜 License
Released under the MIT License
v0.1.1
📦 Release: v0.1.1 – Training Visualization Added
This update enhances PyCNN by adding real-time training visualization for accuracy and loss per epoch using Matplotlib — making it easier to monitor model performance while training.
🚀 Key Features
- ✅ Fully functional CNN implementation from scratch
- 🧠 Manual convolution, max pooling, and ReLU activations
- 🔁 Forward and backward propagation with mini-batch gradient descent
- 🏷 Multi-class classification via softmax and cross-entropy loss
- 💾 Model save/load using pickle
- 🖼 RGB image preprocessing with customizable filters
- 🔍 Predict function to classify new unseen images
- 📊 New: Real-time training visualization (accuracy & loss per epoch) in train_model(visualize=True)
🖼️ Preview
📌 Installation
📦 Install directly from GitHub:
pip install git+https://github.com/77AXEL/PyCNN.git@v0.1.1
🧪 How to Use
🏁 Train the model with visualization:
from cnnfs.model import CNN
model = CNN()
model.init(
image_size=32,
batch_size=32,
h1=128,
h2=64,
learning_rate=0.01,
epochs=10,
dataset_path="data",
max_image=200
)
model.load_dataset()
model.train_model(visualize=True) # <-- visualization enabled (Without this setting, the visualization won’t appear)
model.save_model()
🔍 Predict an image:
model.load_model("model.bin")
result = model.predict("path/to/image.png")
print("Prediction:", result)🗂 Included Filters
PyCNN uses a set of basic filters for feature extraction (typical kernel values are sketched after this list):
- Sharpen
- Vertical edges
- Laplacian (high-pass)
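The exact default kernels ship with the library; typical 3×3 versions of the three filters named above look like this (illustrative values only):

```python
import numpy as np

sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]])

vertical_edges = np.array([[-1, 0, 1],
                           [-2, 0, 2],   # Sobel-style vertical edge detector
                           [-1, 0, 1]])

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],        # high-pass / Laplacian kernel
                      [0,  1, 0]])
```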
🧾 Changelog
v0.1.1
- New: Real-time training visualization (accuracy & loss per epoch) with Matplotlib.
- Dark theme visualization with clear, color-coded curves.
- Optional: Enable by calling train_model(visualize=True).
v0.1.0
- Initial version with full training and prediction pipeline
- Manual convolution + pooling + ReLU
- Batch softmax + cross-entropy loss
- Two hidden layers and output layer
- Save/load model capability
- Example dataset folder included
- Beginner-friendly structure for learning how CNNs work
💬 Feedback & Contributions
We welcome issues, suggestions, and contributions!
Join the conversation in the [Discussions tab](https://github.com/77AXEL/PyCNN/discussions) or check the [CONTRIBUTING.md](./CONTRIBUTING.md) guide.
🛡 Security
Found a security issue? Please report it privately to:
📧 a.x.e.l777444000@gmail.com
📜 License
Released under the MIT License
v0.1.0
📦 Release: v0.1.0 – Initial Working Version
This is the first official release of PyCNN — a lightweight and educational CNN built using only NumPy, SciPy, and Pillow, with zero reliance on machine learning frameworks.
🚀 Key Features
- ✅ Fully functional CNN implementation from scratch
- 🧠 Manual convolution, max pooling, and ReLU activations
- 🔁 Forward and backward propagation with mini-batch gradient descent
- 🏷 Multi-class classification via softmax and cross-entropy loss
- 💾 Model save/load using pickle
- 🖼 RGB image preprocessing with customizable filters
- 🔍 Predict function to classify new unseen images
📌 Installation
📦 Install directly from GitHub:
pip install git+https://github.com/77AXEL/PyCNN.git@v0.1.0
🧪 How to Use
🏁 Train the model:
from cnnfs.model import CNN
model = CNN()
model.init(
image_size=32,
batch_size=32,
h1=128,
h2=64,
learning_rate=0.01,
epochs=10,
dataset_path="data",
max_image=200
)
model.load_dataset()
model.train_model()
model.save_model()
🔍 Predict an image:
model.load_model("model.bin")
result = model.predict("path/to/image.png")
print("Prediction:", result)🗂 Included Filters
PyCNN uses a set of basic filters for feature extraction (a short convolution/pooling sketch follows this list):
- Sharpen
- Vertical edges
- Laplacian (high-pass)
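To make the "manual convolution, max pooling, and ReLU" feature concrete, here is a minimal NumPy/SciPy sketch of one such step on a single image channel (kernel values and pool size are illustrative; PyCNN's own implementation details differ):

```python
import numpy as np
from scipy.signal import convolve2d

def conv_relu_maxpool(channel, kernel, pool=2):
    """Convolve one channel with a 3x3 kernel, apply ReLU, then 2x2 max-pool."""
    feat = convolve2d(channel, kernel, mode="valid")  # manual convolution
    feat = np.maximum(feat, 0)                        # ReLU activation
    h = feat.shape[0] // pool * pool                  # crop so the map divides evenly
    w = feat.shape[1] // pool * pool
    feat = feat[:h, :w].reshape(h // pool, pool, w // pool, pool)
    return feat.max(axis=(1, 3))                      # max pooling

channel = np.random.rand(32, 32)                      # e.g. the red channel of a 32x32 image
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])
print(conv_relu_maxpool(channel, laplacian).shape)    # -> (15, 15)
```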
🧾 Changelog
- Initial version with full training and prediction pipeline
- Manual convolution + pooling + ReLU
- Batch softmax + cross-entropy loss
- Two hidden layers and output layer
- Save/load model capability
- Example dataset folder included
- Beginner-friendly structure for learning how CNNs work
💬 Feedback & Contributions
We welcome issues, suggestions, and contributions!
Join the conversation in the [Discussions tab](https://github.com/77AXEL/PyCNN/discussions) or check the [CONTRIBUTING.md](./CONTRIBUTING.md) guide.
🛡 Security
Found a security issue? Please report it privately to:
📧 a.x.e.l777444000@gmail.com
📜 License
Released under the MIT License



