
Saturday, 4 January 2025

DIY Game Console Using SSD1306 OLED Display and NodeMCU/Arduino UNO



DIY Game Console Using OLED Display and NodeMCU/Arduino UNO

Have you ever wondered if you could create your own handheld game console? In this project, I built a compact and functional game console using a 0.96-inch OLED display and a NodeMCU or Arduino UNO. This console is not only a fun project but also a great way to dive into game development and hardware programming.



Project Highlights

  1. Hardware Features:

    • Display: A crisp, 0.96-inch OLED display that brings the game graphics to life.
    • Controller: Designed with buttons for user input, providing a classic gaming feel.
    • Microcontroller: Choose between NodeMCU for Wi-Fi capabilities or Arduino UNO for a simpler setup.
  2. Software Development:

    • Programmed with libraries for handling the OLED display and managing game logic.
    • Simple, retro-style games developed to showcase the console's capabilities.
  3. Applications:

    • Learn about display handling, game logic, and embedded systems.
    • Expandable to include more games or additional features.


Challenges and Learnings

During this project, I explored handling small displays, optimizing microcontroller performance, and balancing functionality with hardware constraints. It’s a rewarding experience for anyone interested in IoT, game development, or Arduino programming.



Why You Should Try It

This project is perfect for hobbyists, makers, or students wanting to combine creativity and technology. With affordable components and endless customization options, you can make it uniquely yours.



Components Required

  • NodeMCU or Arduino UNO Microcontroller
  • 4 Push Buttons
  • Tools like Soldering Iron, Hot Glue Gun
  • OLED 0.96 inch I2C Display


Circuit diagram

This is the circuit diagram for the project. I drew it in TinkerCAD, which does not include the SSD1306 OLED display, so the diagram shows a 16x2 I2C display instead; both displays use the same I2C communication method.
So you can simply replace the 16x2 display connections with your 0.96-inch OLED display: on a NodeMCU, SDA and SCL typically go to D2 (GPIO4) and D1 (GPIO5); on an Arduino UNO, to A4 and A5.



Upload the Code

#define SSD1306_I2C_ADDRESS 0x3C

#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

// OLED display size
#define SCREEN_WIDTH 128
#define SCREEN_HEIGHT 64
#define OLED_RESET -1  // Reset pin not used

Adafruit_SSD1306 display(SCREEN_WIDTH, SCREEN_HEIGHT, &Wire, OLED_RESET);

// Button pins
#define UP_BUTTON 0     // GPIO0  (D3 on a NodeMCU)
#define LEFT_BUTTON 14  // GPIO14 (D5 on a NodeMCU)
#define DOWN_BUTTON 13  // GPIO13 (D7 on a NodeMCU)
#define RIGHT_BUTTON 12 // GPIO12 (D6 on a NodeMCU)

// Snake parameters
#define MAX_LENGTH 100
int snakeX[MAX_LENGTH];
int snakeY[MAX_LENGTH];
int snakeLength = 5;
int foodX, foodY;

// Movement direction
int directionX = 1;
int directionY = 0;

// Game state
bool gameOver = false;

// Debounce variables
unsigned long lastDebounceTime = 0;
const unsigned long debounceDelay = 50; // 50ms debounce delay

void setup() {
  // Start serial output for error messages
  Serial.begin(115200);

  // Initialize buttons as inputs with internal pull-up resistors
  pinMode(UP_BUTTON, INPUT_PULLUP);
  pinMode(LEFT_BUTTON, INPUT_PULLUP);
  pinMode(DOWN_BUTTON, INPUT_PULLUP);
  pinMode(RIGHT_BUTTON, INPUT_PULLUP);

  // Initialize display (SSD1306_SWITCHCAPVCC generates the panel voltage internally)
  if (!display.begin(SSD1306_SWITCHCAPVCC, SSD1306_I2C_ADDRESS)) {
    Serial.println(F("SSD1306 allocation failed"));
    for (;;);
  }
  display.clearDisplay();

  // Initialize snake position on the 4-pixel grid, head near the centre of the screen
  for (int i = 0; i < snakeLength; i++) {
    snakeX[i] = 64 - i * 4;
    snakeY[i] = 32;
  }

  // Seed the random number generator, then place the initial food
  randomSeed(analogRead(A0));
  placeFood();
}

void loop() {
  if (gameOver) {
    showGameOver();
    return;
  }

  // Handle button presses with debounce
  handleButtonPress();

  // Move the snake
  moveSnake();

  // Check for collisions
  checkCollision();

  // Update display
  drawGame();

  // Delay for game speed
  delay(200);
}

void placeFood() {
  foodX = random(0, SCREEN_WIDTH / 4) * 4;
  foodY = random(0, SCREEN_HEIGHT / 4) * 4;
}

void moveSnake() {
  // Move body
  for (int i = snakeLength - 1; i > 0; i--) {
    snakeX[i] = snakeX[i - 1];
    snakeY[i] = snakeY[i - 1];
  }

  // Move head
  snakeX[0] += directionX * 4;
  snakeY[0] += directionY * 4;

  // Check if food is eaten
  if (snakeX[0] == foodX && snakeY[0] == foodY) {
    if (snakeLength < MAX_LENGTH) {
      snakeLength++;
    }
    placeFood();
  }
}

void checkCollision() {
  // Check wall collision
  if (snakeX[0] < 0 || snakeX[0] >= SCREEN_WIDTH || snakeY[0] < 0 || snakeY[0] >= SCREEN_HEIGHT) {
    gameOver = true;
  }

  // Check self-collision
  for (int i = 1; i < snakeLength; i++) {
    if (snakeX[0] == snakeX[i] && snakeY[0] == snakeY[i]) {
      gameOver = true;
      break;
    }
  }
}

void handleButtonPress() {
  unsigned long currentTime = millis();

  // Check UP button
  if (digitalRead(UP_BUTTON) == LOW && currentTime - lastDebounceTime > debounceDelay && directionY == 0) {
    directionX = 0;
    directionY = -1;
    lastDebounceTime = currentTime;
   
  }
  // Check LEFT button
  else if (digitalRead(LEFT_BUTTON) == LOW && currentTime - lastDebounceTime > debounceDelay && directionX == 0) {
    directionX = -1;
    directionY = 0;
    lastDebounceTime = currentTime;
  }
  // Check DOWN button
  else if (digitalRead(DOWN_BUTTON) == LOW && currentTime - lastDebounceTime > debounceDelay && directionY == 0) {
    directionX = 0;
    directionY = 1;
    lastDebounceTime = currentTime;
  }
  // Check RIGHT button
  else if (digitalRead(RIGHT_BUTTON) == LOW && currentTime - lastDebounceTime > debounceDelay && directionX == 0) {
    directionX = 1;
    directionY = 0;
    lastDebounceTime = currentTime;
  }

}

void drawGame() {
  display.clearDisplay();

  // Draw the snake
  for (int i = 0; i < snakeLength; i++) {
    display.fillRect(snakeX[i], snakeY[i], 4, 4, SSD1306_WHITE);
  }

  // Draw the food
  display.fillRect(foodX, foodY, 4, 4, SSD1306_WHITE);

  // Show the updated screen
  display.display();
}

void showGameOver() {
  display.clearDisplay();
  display.setTextSize(2);
  display.setTextColor(SSD1306_WHITE);
  display.setCursor(10, 25);
  display.println("Game Over!");
  display.display();
  while (true); // Stop the game
}

This is the code you can copy and paste if you used the same button and display connections. If you changed the pin connections, update the button pin definitions on lines 14 to 18 of the sketch, shown again below:

// Button pins
#define UP_BUTTON 0     // GPIO0  (D3 on a NodeMCU)
#define LEFT_BUTTON 14  // GPIO14 (D5 on a NodeMCU)
#define DOWN_BUTTON 13  // GPIO13 (D7 on a NodeMCU)
#define RIGHT_BUTTON 12 // GPIO12 (D6 on a NodeMCU)


You can watch the full build video here: https://youtu.be/cTQhhXtyOdo


This project showcases how simple components like an OLED display and a microcontroller can come together to create something amazing. The possibilities are endless, and the journey of learning and building is the most exciting part. Happy making!

Tuesday, 10 December 2024

How to Implement YOLOv3 Using Python and TensorFlow

Object Detection with YOLOv3

Introduction

YOLOv3 (You Only Look Once version 3) is a real-time object detection algorithm that can detect objects in images and videos at high speed. It is one of the most popular object detection algorithms due to its speed and accuracy.

Uses of this Project

  • Surveillance and security
  • Self-driving cars
  • Medical imaging
  • Robotics
  • Manufacturing

Requirements

  • Python 3.6 or later
  • TensorFlow 2.0 or later
  • CUDA 10.0 or later (only needed for GPU acceleration)
  • YOLOv3 weights
  • Image or video to detect objects in

Implementation

To implement YOLOv3, you can follow these steps:

  1. Install the required dependencies.
  2. Load the YOLOv3 weights into a TensorFlow model.
  3. Preprocess the image or video to be detected.
  4. Run the YOLOv3 model on the preprocessed image or video.
  5. Postprocess the output of the YOLOv3 model to get the bounding boxes and class labels of the detected objects.

Example Code

import tensorflow as tf
import cv2
import numpy as np
from tensorflow.keras.preprocessing import image

# Load the YOLOv3 model (this assumes a Keras .h5 conversion of the pretrained
# weights whose outputs are already decoded into boxes and per-class scores)
model = tf.keras.models.load_model('yolov3.h5')

# Preprocess the image
img = image.load_img('image.jpg')
img = img.resize((416, 416))  # Resize image to YOLOv3 input size
img_array = image.img_to_array(img)
img_array = np.expand_dims(img_array, axis=0)  # Add batch dimension
img_array = img_array / 255.0  # Normalize image to [0, 1] range

# Run the YOLOv3 model on the image
output = model.predict(img_array)

# Postprocess the output with non-max suppression (this assumes the model returns
# boxes of shape [batch, num_boxes, 1, 4] and scores of shape [batch, num_boxes, num_classes])
boxes, scores, classes, valid_detections = tf.image.combined_non_max_suppression(
    boxes=output[0],
    scores=output[1],
    max_output_size_per_class=100,
    max_total_size=100,
    iou_threshold=0.5,
    score_threshold=0.5
)

# Convert output to numpy arrays for easier manipulation
boxes = boxes.numpy()
scores = scores.numpy()
classes = classes.numpy()

# Draw bounding boxes and labels on the image
img_cv = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)

for i in range(int(valid_detections[0])):
    # Box coordinates are normalized to [0, 1]; scale them to the 416x416 image
    ymin, xmin, ymax, xmax = boxes[0][i] * 416
    class_id = int(classes[0][i])
    score = scores[0][i]
    label = f'Class {class_id}: {score:.2f}'

    # Draw bounding box and label
    cv2.rectangle(img_cv, (int(xmin), int(ymin)), (int(xmax), int(ymax)), (0, 255, 0), 2)
    cv2.putText(img_cv, label, (int(xmin), int(ymin) - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

# Save the annotated image
cv2.imwrite('output.jpg', img_cv)

Conclusion

YOLOv3 is a powerful object detection algorithm that can be used for a variety of applications. It is relatively easy to implement and can achieve high accuracy. If you are looking for a real-time object detection algorithm, YOLOv3 is a great option.


Tuesday, 3 December 2024

Introduction to Generative Adversarial Networks (GANs): Unsupervised Machine Learning

Generative Adversarial Networks (GANs)

Introduction

GANs are a type of unsupervised learning model that can generate new data that is similar to real data. They consist of two competing networks: a generator that creates synthetic data and a discriminator that distinguishes between real and generated data.

Uses of GANs

  • Generating realistic images, videos, and music
  • Creating new data for training other machine learning models
  • Improving the performance of existing machine learning models

Requirements

  • A deep learning framework such as TensorFlow or PyTorch
  • A dataset of real data
  • A computer with a GPU

How GANs Work

GANs work by training the generator and discriminator networks simultaneously. The generator network takes random noise as input and produces synthetic data. The discriminator network takes both real data and synthetic data as input and tries to distinguish between the two.

The two networks are trained with opposing objectives. The value function measures how well the discriminator distinguishes real data from synthetic data: the discriminator is trained to maximize it, while the generator is trained to minimize it, that is, to fool the discriminator.
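
For reference, in the original GAN formulation (Goodfellow et al., 2014; the notation below comes from that paper, not from this post), this adversarial game is written as a minimax value function, where D(x) is the discriminator's estimate that x is real and G(z) is the generator's output for noise z:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]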

As the generator and discriminator networks train, they become better at their respective tasks. The generator network learns to produce more realistic data, and the discriminator network learns to better distinguish between real and synthetic data.
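
To make this training loop concrete, here is a minimal sketch of one adversarial training step written in TensorFlow/Keras. The tiny network architectures, the latent dimension, and the use of MNIST digits as the "real" data are illustrative assumptions, not something specified in this post.

import tensorflow as tf

latent_dim = 64  # size of the random-noise input (illustrative choice)

# A tiny generator: random noise -> 28x28 image
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(latent_dim,)),
    tf.keras.layers.Dense(28 * 28, activation='sigmoid'),
    tf.keras.layers.Reshape((28, 28, 1)),
])

# A tiny discriminator: image -> probability that the image is real
discriminator = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    noise = tf.random.normal((tf.shape(real_images)[0], latent_dim))
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_pred = discriminator(real_images, training=True)
        fake_pred = discriminator(fake_images, training=True)
        # Discriminator: label real images 1 and generated images 0
        d_loss = bce(tf.ones_like(real_pred), real_pred) + bce(tf.zeros_like(fake_pred), fake_pred)
        # Generator: try to make the discriminator label generated images as real
        g_loss = bce(tf.ones_like(fake_pred), fake_pred)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss

# Example usage: train on MNIST digits as the "real" data
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype('float32')[..., None] / 255.0
dataset = tf.data.Dataset.from_tensor_slices(x_train).shuffle(10000).batch(128)

for epoch in range(5):
    for batch in dataset:
        d_loss, g_loss = train_step(batch)
    print(f'epoch {epoch}: d_loss={float(d_loss):.3f}, g_loss={float(g_loss):.3f}')

In practice you would train for many more epochs and periodically inspect images produced by the generator to judge progress.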

Conclusion

GANs are a powerful tool for generating new data. They have a wide range of applications, including image, video, and music generation. GANs are still an active area of research, but they have the potential to revolutionize many industries, for example by helping to design new drugs, new materials, and new medical treatments.


Friday, 22 November 2024

Mastering Linux and the Shell: A Comprehensive Guide to Command-Line Efficiency and Scripting

Learn Linux and Shell with examples


 

What is a Shell?

A shell is a user interface that provides access to the operating system's services. It acts as an intermediary between the user and the operating system kernel, interpreting and executing commands input by the user. The term "shell" refers to its role as a layer around the kernel, enabling users to interact with the system via command-line or script-based interfaces.

In the Unix/Linux ecosystem, a shell is often a command-line interpreter (CLI). Common examples include Bash (Bourne Again Shell), Zsh (Z Shell), Ksh (Korn Shell), and Fish (Friendly Interactive Shell). Shells can run commands, automate repetitive tasks, and execute shell scripts, which are sequences of commands written in files.


Real-World Applications of Shell Scripting

  1. System Administration: Automating backups, user management, and log rotation.
  2. Data Processing: Parsing logs, processing files, and data transformation.
  3. Development: Setting up environments, running builds, and deploying applications.
  4. Networking: Monitoring servers, transferring files, and managing connections.

Advantages of Using a Shell

  1. Powerful Command Execution:
    The shell provides direct access to all underlying system utilities, allowing users to perform a wide variety of tasks with a single command.

  2. Scripting and Automation:
    Shell scripts allow users to automate repetitive tasks, such as backups, deployments, and system monitoring, saving time and effort.

  3. Customizability:
    Users can customize their shell environment by setting aliases, modifying the prompt, and defining environment variables.

  4. Integration with Unix Tools:
    The shell seamlessly integrates with powerful command-line tools like grep, awk, sed, and find, enabling complex operations on files and data.

  5. Portability:
    Shell scripts can often run on multiple Unix/Linux systems without modification, making them highly portable.

  6. Efficiency:
    Shells allow direct manipulation of files, processes, and networks, often faster than using GUI tools for similar tasks.

  7. Lightweight Interface:
    Shells consume minimal system resources, making them ideal for use in servers and resource-constrained environments.


Shell Scripting:

Learn shell scripting from basics to advanced concepts with examples

1. What is Shell Scripting?

Shell scripting is writing a series of commands for the shell to execute. It automates repetitive tasks, manages files, and interacts with the operating system efficiently.

2. Displaying Output

Use echo to display text on the terminal.

# Display a message
echo "Hello, Shell Scripting!"

3. Variables

Define and use variables in your scripts.

# Define a variable
name="John Doe"

# Use the variable
echo "Hello, $name!"

4. Taking Input

Use read to take user input.

# Prompt the user for input
echo "Enter your name:"
read user_name
echo "Hello, $user_name!"

5. Conditional Statements

Use if, else, and elif for decision-making.

# Check if a file exists
if [ -f "example.txt" ]; then
    echo "File exists."
else
    echo "File does not exist."
fi

6. Loops

For Loop

# Example of a for loop
for i in 1 2 3; do
    echo "Number: $i"
done

While Loop

# Example of a while loop
count=1
while [ $count -le 5 ]; do
    echo "Count: $count"
    count=$((count + 1))
done

7. Functions

Encapsulate reusable code with functions.

# Define and call a function
greet() {
    echo "Hello, $1!"
}
greet "Alice"

8. File Operations

Create, read, and delete files.

# Create a file
echo "Sample Text" > file.txt

# Read the file
cat file.txt

# Delete the file
rm file.txt

9. File Permissions

Modify file permissions with chmod.

# Make a file executable
chmod +x script.sh

10. Redirecting Input and Output

# Write output to a file
echo "Hello" > file.txt

# Append to a file
echo "World" >> file.txt

# Redirect input from a file
cat < file.txt

11. Piping Commands

Use pipes (|) to pass the output of one command as input to another.

# Count lines in a file
cat file.txt | wc -l

12. Arrays

# Define an array
my_array=(one two three)

# Access array elements (bash arrays are zero-indexed, so this prints "two")
echo ${my_array[1]}

13. Scheduling Tasks with Cron

# Open the cron table for editing
crontab -e

# Add this line inside the crontab file to run the script daily at midnight
0 0 * * * /path/to/script.sh

14. Exit Status

Check the status of the last executed command.

# Run a command, then check whether it succeeded
ls /tmp > /dev/null
if [ $? -eq 0 ]; then
    echo "Command succeeded."
else
    echo "Command failed."
fi

15. Debugging Scripts

# Enable debugging
bash -x script.sh
 

Conclusion

Shell scripting is a powerful tool that enhances productivity, simplifies complex tasks, and provides deep control over the operating system. Mastering the shell involves learning its commands, control structures, and integrations with system tools. It is essential for anyone working with Unix/Linux systems, whether as a developer, system administrator, or data analyst.

 

Wednesday, 20 November 2024

PWM Motor Control: control the power supplied to a load by varying the width of the pulses in a digital signal

 

Beginner Embedded C++ Project: PWM Motor Control



Introduction

Pulse-width modulation (PWM) is a technique used to control the power supplied to a load by varying the width of the pulses in a digital signal. Combined with a motor driver, this lets us control both the speed and the direction of a DC motor.

Uses of this Project

  • Control the speed of a fan
  • Control the direction of a robot
  • Dim the brightness of an LED

Requirements

  • A microcontroller with a PWM peripheral
  • A DC motor
  • A motor driver (if the motor requires more current than the microcontroller can provide)
  • Some wires

Code


// Define the PWM pin
const int pwmPin = 9;

// Define the motor driver pins
const int motorA = 10;
const int motorB = 11;

void setup() {
  // Set the PWM pin to output mode
  pinMode(pwmPin, OUTPUT);

  // Set the motor driver pins to output mode
  pinMode(motorA, OUTPUT);
  pinMode(motorB, OUTPUT);
}

void loop() {
  // Set the PWM duty cycle (128 out of 255, roughly 50%) to control the motor speed
  analogWrite(pwmPin, 128);

  // Set the direction of the motor by setting the motor driver pins
  digitalWrite(motorA, HIGH);
  digitalWrite(motorB, LOW);

  // Delay for 1 second
  delay(1000);

  // Reverse the direction of the motor
  digitalWrite(motorA, LOW);
  digitalWrite(motorB, HIGH);

  // Delay for 1 second
  delay(1000);
}

Conclusion

PWM motor control is a powerful technique that can be used to control the speed and direction of a DC motor. This project is a great way to learn how to use PWM and get started with embedded C++ development.


Monday, 18 November 2024

Step-by-step guide to getting started with TensorFlow, an open-source machine learning library

 


TensorFlow First Model

Introduction

TensorFlow is an open-source machine learning library developed by Google. It is used for a wide range of machine learning tasks, including image classification, object detection, natural language processing, and time series analysis.

Uses of this Project

  • Build and train machine learning models
  • Deploy machine learning models to production
  • Develop new machine learning algorithms
  • Contribute to the TensorFlow community

Requirements

  • Python 3.6 or later
  • TensorFlow 2.0 or later
  • A text editor or IDE

Getting Started

To get started, create a new Python project with a virtual environment, then install TensorFlow inside it.

Creating a New Python Project

mkdir my_project
cd my_project
python3 -m venv venv
source venv/bin/activate

Installing TensorFlow

pip install tensorflow

Creating Your First Model

Now that you have TensorFlow installed and a new Python project created, you can build your first model. The example below trains a small classifier on the MNIST handwritten-digit dataset.

import tensorflow as tf

# Load the MNIST handwritten-digit dataset and flatten each 28x28 image
# into a 784-element vector so it matches the model's input shape
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0
x_test = x_test.reshape(-1, 784).astype('float32') / 255.0

# Create a simple model
model = tf.keras.models.Sequential([
  tf.keras.layers.Dense(units=10, activation='relu', input_shape=(784,)),
  tf.keras.layers.Dense(units=10, activation='relu'),
  tf.keras.layers.Dense(units=10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Train the model
model.fit(x_train, y_train, epochs=10)

# Evaluate the model
model.evaluate(x_test, y_test)
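
As an optional next step (not part of the original tutorial), here is a small sketch of using the trained model for a single prediction and saving it to disk for later reuse; the file name is just an example.

import numpy as np

# Predict the digit for the first test image (the model outputs 10 class probabilities)
probabilities = model.predict(x_test[:1])
print("Predicted digit:", np.argmax(probabilities[0]))

# Save the trained model and load it back later
model.save('first_model.h5')
restored_model = tf.keras.models.load_model('first_model.h5')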

Next Steps

Now that you have created your first model, you can explore the many other features that TensorFlow has to offer. Here are a few ideas for next steps:

  • Learn more about the different types of machine learning models that you can build with TensorFlow.
  • Explore the TensorFlow documentation to learn how to use the library's many features.
  • Contribute to the TensorFlow community by sharing your projects and ideas.

Conclusion

TensorFlow is a powerful machine learning library that can be used to build a wide range of machine learning models. This tutorial has provided you with a basic introduction to TensorFlow. To learn more, please refer to the TensorFlow documentation and explore the many resources that are available online.

