<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: nima</title>
    <description>The latest articles on DEV Community by nima (@nimadorostkar).</description>
    <link>https://dev.to/nimadorostkar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F610722%2F2c24b47d-0d44-456b-9e8e-ea14aeac0d95.jpeg</url>
      <title>DEV Community: nima</title>
      <link>https://dev.to/nimadorostkar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nimadorostkar"/>
    <language>en</language>
    <item>
      <title>YOLOV3 &amp; Tensorflow object detection and report human movements in persian</title>
      <dc:creator>nima</dc:creator>
      <pubDate>Sun, 11 Apr 2021 10:43:39 +0000</pubDate>
      <link>https://dev.to/nimadorostkar/yolov3-tensorflow-object-detection-and-report-human-movements-in-persian-176d</link>
      <guid>https://dev.to/nimadorostkar/yolov3-tensorflow-object-detection-and-report-human-movements-in-persian-176d</guid>
      <description>&lt;p&gt;Yolov3 is an algorithm that uses deep convolutional neural networks to perform object detection. This repository implements Yolov3 using TensorFlow.&lt;/p&gt;

&lt;p&gt;You can access this repository on GitHub:&lt;br&gt;
&lt;a href="https://github.com/nimadorostkar/human-detection"&gt;https://github.com/nimadorostkar/human-detection&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Several algorithms were considered for implementing the object detection system, but ultimately YOLO was chosen as the core algorithm. YOLO was selected for its high speed and computational efficiency, as well as the wealth of learning resources available to guide users through its implementation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_hjGIGrr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gi185bx2col8reas4te2.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_hjGIGrr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gi185bx2col8reas4te2.jpeg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Getting started&lt;br&gt;
Pip:&lt;br&gt;
pip install -r requirements.txt&lt;br&gt;
Downloading the official pretrained weights&lt;br&gt;
You can download the dataset from the links below.&lt;br&gt;
For Linux: let's download the official yolov3 weights pretrained on the COCO dataset.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# yolov3
wget https://pjreddie.com/media/files/yolov3.weights -O weights/yolov3.weights
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# yolov3-tiny
wget https://pjreddie.com/media/files/yolov3-tiny.weights -O weights/yolov3-tiny.weights
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For Windows: you can download the yolov3 weights by clicking here and yolov3-tiny here, then save them to the weights folder.&lt;br&gt;
Saving your yolov3 weights as a TensorFlow model:&lt;br&gt;
load the weights using the load_weights.py script. This will convert the yolov3 weights into TensorFlow .ckpt model files.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# yolov3
python load_weights.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# yolov3-tiny
python load_weights.py --weights ./weights/yolov3-tiny.weights --output ./weights/yolov3-tiny.tf --tiny
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After executing one of the above lines, you should see .tf files in your weights folder.&lt;br&gt;
Running just the TensorFlow model:&lt;br&gt;
the TensorFlow model can also be run without the APIs, using the detect.py script.&lt;br&gt;
Don't forget to set the IoU (Intersection over Union) and confidence thresholds in your yolov3-tf2/models.py file.&lt;br&gt;
Usage examples:&lt;br&gt;
let's run an example or two using the sample images found in the data/images folder.&lt;/p&gt;
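&lt;p&gt;A side note on the IoU threshold mentioned above: IoU (Intersection over Union) measures how much two boxes overlap, and detections overlapping a higher-scoring box beyond the threshold are suppressed. A minimal pure-Python sketch (illustrative only; the repository computes this inside the TensorFlow graph):&lt;/p&gt;

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp the intersection size to zero when the boxes do not overlap
    iw = max(0.0, ix2 - ix1)
    ih = max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```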

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# yolov3
python detect.py --images "data/images/dog.jpg, data/images/office.jpg"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# yolov3-tiny
python detect.py --weights ./weights/yolov3-tiny.tf --tiny --images "data/images/dog.jpg"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# webcam
python detect_video.py --video 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# video file
python detect_video.py --video data/video/paris.mp4 --weights ./weights/yolov3-tiny.tf --tiny
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# video file with output saved (can save webcam like this too)
python detect_video.py --video path_to_file.mp4 --output ./detections/output.avi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then you can find the detections in the detections folder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iDdw0D0---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o0yg6ps6gupo3w4kcpu1.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iDdw0D0---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o0yg6ps6gupo3w4kcpu1.gif" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Be sure to try this:&lt;br&gt;
the command below captures real-time video from the webcam, analyzes the objects in each frame, and if a human is detected it is announced by voice.&lt;br&gt;
python detect_video.py --video 0&lt;br&gt;
&lt;a href="https://github.com/nimadorostkar/human-detection"&gt;https://github.com/nimadorostkar/human-detection&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>yolov3</category>
      <category>tensorflow</category>
      <category>objectdetection</category>
    </item>
    <item>
      <title>Real Time Lane Detection — python opencv</title>
      <dc:creator>nima</dc:creator>
      <pubDate>Sun, 11 Apr 2021 10:36:31 +0000</pubDate>
      <link>https://dev.to/nimadorostkar/real-time-lane-detection-python-opencv-5nn</link>
      <guid>https://dev.to/nimadorostkar/real-time-lane-detection-python-opencv-5nn</guid>
      <description>&lt;p&gt;Overview&lt;br&gt;
Lane detection is one of the most crucial techniques in ADAS and has received significant attention recently. In this project we achieve real-time lane detection using NumPy and multi-threading.&lt;/p&gt;

&lt;p&gt;Dependencies:&lt;/p&gt;

&lt;p&gt;Python&lt;br&gt;
Numpy&lt;br&gt;
Opencv&lt;/p&gt;

&lt;p&gt;How to Run:&lt;/p&gt;

&lt;p&gt;Run lane_detection.py. The default video is project_video.mp4; if you want to process "fog_video.mp4", change video_index to 1 on line 9.&lt;/p&gt;

&lt;p&gt;Full source code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np
import cv2
import time
from threading import Thread
from queue import Queue
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# default video number; if you want to process "fog_video.mp4", change video_index to 1
video_index = 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# the result of lane detection; we add the road to the main frame
road = np.zeros((720, 1280, 3))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# a flag meaning the process has started
started = 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Pipeline combining color and gradient thresholding
def thresholding_pipeline(img, s_thresh=(90, 255), sxy_thresh=(20, 100)):
    img = np.copy(img)
    # 1: Convert to HLS color space and separate the channels
    hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS).astype(np.float64)
    h_channel = hls[:, :, 0]
    l_channel = hls[:, :, 1]
    s_channel = hls[:, :, 2]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 2: Calculate x directional gradient
sobelx = cv2.Sobel(l_channel, cv2.CV_64F, 1, 0)  # Take the derivative in x
# Absolute x derivative to accentuate lines away from horizontal
abs_sobelx = np.absolute(sobelx)
scaled_sobelx = np.uint8(255 * abs_sobelx / np.max(abs_sobelx))
sxbinary = np.zeros_like(scaled_sobelx)
sxbinary[(scaled_sobelx &amp;gt;= sxy_thresh[0]) &amp;amp;
         (scaled_sobelx &amp;lt;= sxy_thresh[1])] = 1
grad_thresh = sxbinary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 3: Color threshold of the s channel
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel &amp;gt;= s_thresh[0]) &amp;amp; (s_channel &amp;lt;= s_thresh[1])] = 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 4: Combine the two binary thresholds
combined_binary = np.zeros_like(grad_thresh)
combined_binary[(s_binary == 1) | (grad_thresh == 1)] = 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;return combined_binary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Apply perspective transformation to bird's eye view
def perspective_transform(img, src_mask, dst_mask):
    img_size = (img.shape[1], img.shape[0])
    src = np.float32(src_mask)
    dst = np.float32(dst_mask)
    M = cv2.getPerspectiveTransform(src, dst)
    warped_img = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR)
    return warped_img
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
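&lt;p&gt;cv2.getPerspectiveTransform solves for a 3x3 homography matrix M; warping then maps each pixel through M with a projective division. Applying M to a single point can be sketched in plain Python (apply_homography is an illustrative helper, not part of the repository):&lt;/p&gt;

```python
def apply_homography(M, x, y):
    """Map point (x, y) through a 3x3 homography matrix M (list of rows)."""
    denom = M[2][0] * x + M[2][1] * y + M[2][2]  # projective scale factor
    xw = (M[0][0] * x + M[0][1] * y + M[0][2]) / denom
    yw = (M[1][0] * x + M[1][1] * y + M[1][2]) / denom
    return xw, yw
```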

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Implement sliding windows and fit a polynomial
def sliding_windows(binary_warped, nwindows=9):
    histogram = np.sum(
        binary_warped[int(binary_warped.shape[0]/2):, :], axis=0)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
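&lt;p&gt;The histogram trick used by the sliding-window search can be illustrated without NumPy: sum the bottom half of the binary image column by column, then take the peak of each half as the lane-line starting x. A small sketch (lane_base_positions is a hypothetical helper mirroring the midpoint/argmax logic):&lt;/p&gt;

```python
def lane_base_positions(binary_warped):
    """Find starting x positions of the left and right lane lines.

    binary_warped is a list of rows of 0/1 values; the bottom half is
    summed column-wise, as in the article's sliding-window search.
    """
    rows = binary_warped[len(binary_warped) // 2:]   # bottom half of image
    histogram = [sum(col) for col in zip(*rows)]     # column sums
    midpoint = len(histogram) // 2
    left_half = histogram[:midpoint]
    right_half = histogram[midpoint:]
    leftx_base = left_half.index(max(left_half))
    rightx_base = midpoint + right_half.index(max(right_half))
    return leftx_base, rightx_base
```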

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
midpoint = int(histogram.shape[0]/2)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Set height of windows
window_height = int(binary_warped.shape[0]/nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Current positions to be updated for each window
leftx_current = leftx_base
rightx_current = rightx_base
# Set the width of the windows +/- margin
margin = 100
# Set minimum number of pixels found to recenter window
minpix = 50
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Step through the windows one by one
for window in range(nwindows):
    # Identify window boundaries in x and y (and right and left)
    win_y_low = binary_warped.shape[0] - (window+1)*window_height
    win_y_high = binary_warped.shape[0] - window*window_height
    win_xleft_low = leftx_current - margin
    win_xleft_high = leftx_current + margin
    win_xright_low = rightx_current - margin
    win_xright_high = rightx_current + margin
    # Identify the nonzero pixels in x and y within the window
    good_left_inds = ((nonzeroy &amp;gt;= win_y_low) &amp;amp; (nonzeroy &amp;lt; win_y_high) &amp;amp; (
        nonzerox &amp;gt;= win_xleft_low) &amp;amp; (nonzerox &amp;lt; win_xleft_high)).nonzero()[0]
    good_right_inds = ((nonzeroy &amp;gt;= win_y_low) &amp;amp; (nonzeroy &amp;lt; win_y_high) &amp;amp; (
        nonzerox &amp;gt;= win_xright_low) &amp;amp; (nonzerox &amp;lt; win_xright_high)).nonzero()[0]
    # Append these indices to the lists
    left_lane_inds.append(good_left_inds)
    right_lane_inds.append(good_right_inds)
    # If you found &amp;gt; minpix pixels, recenter next window on their mean position
    if len(good_left_inds) &amp;gt; minpix:
        leftx_current = int(np.mean(nonzerox[good_left_inds]))
    if len(good_right_inds) &amp;gt; minpix:
        rightx_current = int(np.mean(nonzerox[good_right_inds]))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Concatenate the arrays of indices
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Fit a second order polynomial to each
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;return left_fit, right_fit, lefty, leftx, righty, rightx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Warp lane line projection back to original image
def project_lanelines(binary_warped, orig_img, left_fit, right_fit, dst_mask, src_mask):
    global road
    global started
    ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0])
    left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
    right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create an image to draw the lines on
warp_zero = np.zeros_like(binary_warped).astype(np.uint8)
color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array(
    [np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0, 255, 0))
# Warp the blank back to original image space using inverse perspective matrix (Minv)
warped_inv = perspective_transform(color_warp, dst_mask, src_mask)
road = warped_inv
started = 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Main process function
def main_pipeline(input):
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# step 1: select the ROI; we need to undistort the image for fog_video
if video_index == 0:
    image = input
    top_left = [540, 460]
    top_right = [754, 460]
    bottom_right = [1190, 670]
    bottom_left = [160, 670]
else:
    mtx = np.array([[1.15396467e+03, 0.00000000e+00, 6.69708251e+02], [0.00000000e+00, 1.14802823e+03, 3.85661017e+02],
                    [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
    dist = np.array([[-2.41026561e-01, -5.30262184e-02, -
                      1.15775369e-03, -1.27924043e-04, 2.66417032e-02]])
    image = cv2.undistort(input, mtx, dist, None, mtx)
    top_left = [240, 270]
    top_right = [385, 270]
    bottom_right = [685, 402]
    bottom_left = [0, 402]
src_mask = np.array([[(top_left[0], top_left[1]), (top_right[0], top_right[1]),
                      (bottom_right[0], bottom_right[1]), (bottom_left[0], bottom_left[1])]], np.int32)
dst_mask = np.array([[(bottom_left[0], 0), (bottom_right[0], 0),
                      (bottom_right[0], bottom_right[1]), (bottom_left[0], bottom_left[1])]], np.int32)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Step 2 Thresholding: color and gradient thresholds to generate a binary image
binary_image = thresholding_pipeline(image, s_thresh=(90, 255))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Step 3 Perspective transform on the binary image
binary_warped = perspective_transform(binary_image, src_mask, dst_mask)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Step 4 Fit polynomial
left_fit, right_fit, lefty, leftx, righty, rightx = sliding_windows(
    binary_warped, nwindows=9)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Step 5 Project lines
project_lanelines(binary_warped, image, left_fit,
                  right_fit, dst_mask, src_mask)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if __name__ == '__main__':
    frames_counts = 1
    if video_index == 0:
        cap = cv2.VideoCapture('project_video.mp4')
    else:
        cap = cv2.VideoCapture('fog_video.mp4')

    class MyThread(Thread):
        def __init__(self, q):
            Thread.__init__(self)
            self.q = q

        def run(self):
            while True:
                if not self.q.empty():
                    image = self.q.get()
                    main_pipeline(image)

    q = Queue()
    q.queue.clear()
    thd1 = MyThread(q)
    thd1.daemon = True
    thd1.start()

    while True:
        start = time.time()
        ret, frame = cap.read()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    # Detect the lane every 5 frames
    if frames_counts % 5 == 0:
        q.put(frame)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    # Add the lane image onto the original frame once started
    if started:
        frame = cv2.addWeighted(frame, 1, road, 0.5, 0)
    cv2.imshow("RealTime_lane_detection", frame)
    if cv2.waitKey(1) &amp;amp; 0xFF == ord('q'):
        break
    frames_counts += 1
    cv2.waitKey(12)
    finish = time.time()
    print('FPS:  ' + str(int(1/(finish-start))))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cap.release()
cv2.destroyAllWindows()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;GitHub link:&lt;br&gt;
&lt;a href="https://github.com/nimadorostkar/realtime_lane_detection"&gt;https://github.com/nimadorostkar/realtime_lane_detection&lt;/a&gt;&lt;/p&gt;

</description>
      <category>numpy</category>
      <category>python</category>
      <category>opencv</category>
      <category>lanedetection</category>
    </item>
    <item>
      <title>Serial port communication in C</title>
      <dc:creator>nima</dc:creator>
      <pubDate>Sun, 11 Apr 2021 10:32:48 +0000</pubDate>
      <link>https://dev.to/nimadorostkar/serial-port-communication-in-c-4633</link>
      <guid>https://dev.to/nimadorostkar/serial-port-communication-in-c-4633</guid>
      <description>&lt;p&gt;GitHub link for this project:&lt;br&gt;
&lt;a href="https://github.com/atrotech/comtest"&gt;https://github.com/atrotech/comtest&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How to build:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/atrotech/comtest.git
cd comtest
gcc -o comtest comtest.c
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Usage:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./comtest -d /dev/ttyAMA3 -s 38400
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Parameters:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./comtest --help
comtest - interactive program of comm port
press [ESC] 3 times to quit
Usage: comtest [-d device] [-t tty] [-s speed] [-7] [-c] [-x] [-o] [-h]
         -7 7 bit
         -x hex mode
         -o output to stdout too
         -c stdout output use color
         -h print this help
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;comtest.c:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* nine #include directives appeared here; the header names were
   stripped by the formatting (see the repository for the full list) */
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;static int SerialSpeed(const char *SpeedString)
{
    int SpeedNumber = atoi(SpeedString);
#define TestSpeed(Speed) if (SpeedNumber == Speed) return B##Speed
TestSpeed(1200);
TestSpeed(2400);
TestSpeed(4800);
TestSpeed(9600);
TestSpeed(19200);
TestSpeed(38400);
TestSpeed(57600);
TestSpeed(115200);
TestSpeed(230400);
return -1;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;}

static inline void WaitFdWriteable(int Fd)
{
    fd_set WriteSetFD;
    FD_ZERO(&amp;amp;WriteSetFD);
    FD_SET(Fd, &amp;amp;WriteSetFD);
    if (select(Fd + 1, NULL, &amp;amp;WriteSetFD, NULL, NULL) &amp;lt; 0) {
        printf("%s", strerror(errno));
    }
}

int main(int argc, char **argv)
{
    int CommFd, TtyFd;
    struct termios TtyAttr;
    struct termios BackupTtyAttr;
    int DeviceSpeed = B38400;
    int TtySpeed = B38400;
    int ByteBits = CS8;
    const char *DeviceName = "/dev/ttyAMA3";
    const char *TtyName = "/dev/tty";
    int OutputHex = 0;
    int OutputToStdout = 0;
    int UseColor = 0;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CommFd = open(DeviceName, O_RDWR, 0);

if (fcntl(CommFd, F_SETFL, O_NONBLOCK) &amp;lt; 0)
  printf("Unable set to NONBLOCK mode");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    memset(&amp;amp;TtyAttr, 0, sizeof(struct termios));
    TtyAttr.c_iflag = IGNPAR;
    TtyAttr.c_cflag = DeviceSpeed | HUPCL | ByteBits | CREAD | CLOCAL;
    TtyAttr.c_cc[VMIN] = 1;
    if (tcsetattr(CommFd, TCSANOW, &amp;amp;TtyAttr) &amp;lt; 0)
        printf("Unable to set comm port");

    TtyFd = open(TtyName, O_RDWR | O_NDELAY, 0);
    TtyAttr.c_cflag = TtySpeed | HUPCL | ByteBits | CREAD | CLOCAL;
    if (tcgetattr(TtyFd, &amp;amp;BackupTtyAttr) &amp;lt; 0)
        printf("Unable to get tty");
    if (tcsetattr(TtyFd, TCSANOW, &amp;amp;TtyAttr) &amp;lt; 0)
        printf("Unable to set tty");

    for (;;) {
        unsigned char Char = 0;
        fd_set ReadSetFD;

        void OutputStdChar(FILE *File) {
            char Buffer[10];
            int Len = sprintf(Buffer, OutputHex ? "%.2X  " : "%c", Char);
            fwrite(Buffer, 1, Len, File);
        }

        FD_ZERO(&amp;amp;ReadSetFD);
        FD_SET(CommFd, &amp;amp;ReadSetFD);
        FD_SET(TtyFd, &amp;amp;ReadSetFD);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#define max(x,y) ( ((x) &amp;gt;= (y)) ? (x) : (y) )
        if (select(max(CommFd, TtyFd) + 1, &amp;amp;ReadSetFD, NULL, NULL, NULL) &amp;lt; 0) {
            printf("%s", strerror(errno));
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#undef max

        if (FD_ISSET(CommFd, &amp;amp;ReadSetFD)) {
            while (read(CommFd, &amp;amp;Char, 1) == 1) {
                WaitFdWriteable(TtyFd);
                if (write(TtyFd, &amp;amp;Char, 1) &amp;lt; 0) {
                    printf("%s", strerror(errno));
                }
                if (OutputToStdout) {
                    if (UseColor)
                        fwrite("\x1b[01;34m", 1, 8, stdout);
                    OutputStdChar(stdout);
                    if (UseColor)
                        fwrite("\x1b[00m", 1, 5, stdout);
                    fflush(stdout);
                }
            }
        }

        if (FD_ISSET(TtyFd, &amp;amp;ReadSetFD)) {
            while (read(TtyFd, &amp;amp;Char, 1) == 1) {
                static int EscKeyCount = 0;
                WaitFdWriteable(CommFd);
                if (write(CommFd, &amp;amp;Char, 1) &amp;lt; 0) {
                    printf("%s", strerror(errno));
                }
                if (OutputToStdout) {
                    if (UseColor)
                        fwrite("\x1b[01;31m", 1, 8, stderr);
                    OutputStdChar(stderr);
                    if (UseColor)
                        fwrite("\x1b[00m", 1, 5, stderr);
                    fflush(stderr);
                }
                if (Char == '\x1b') {
                    EscKeyCount++;
                    if (EscKeyCount &amp;gt;= 3)
                        goto ExitLabel;
                } else
                    EscKeyCount = 0;
            }
        }
    }

ExitLabel:
    if (tcsetattr(TtyFd, TCSANOW, &amp;amp;BackupTtyAttr) &amp;lt; 0)
        printf("Unable to set tty");

    return 0;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Makefile:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CROSS=arm-linux-

all: armcomtest x86comtest

armcomtest: comtest.c
	$(CROSS)gcc -Wall -O3 -o armcomtest comtest.c

x86comtest: comtest.c
	gcc -o x86comtest comtest.c

clean:
	@rm -vf armcomtest x86comtest *.o *~
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
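&lt;p&gt;The "press [ESC] 3 times to quit" behavior above is a tiny state machine: consecutive ESC bytes are counted and any other byte resets the counter. A minimal sketch (in Python for brevity; make_esc_watcher is an illustrative name, not part of comtest):&lt;/p&gt;

```python
def make_esc_watcher(required=3):
    """Track consecutive ESC presses, mirroring the EscKeyCount logic
    in comtest.c: the returned callback yields True once `required`
    ESC characters arrive in a row; any other character resets the count.
    """
    state = {"count": 0}

    def feed(ch):
        if ch == "\x1b":          # ESC byte
            state["count"] += 1
        else:                     # any other byte resets the counter
            state["count"] = 0
        return min(state["count"], required) == required

    return feed
```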

</description>
      <category>c</category>
      <category>cpp</category>
      <category>serial</category>
      <category>serialport</category>
    </item>
  </channel>
</rss>
