I cannot open the camera in webrtc; this error always occurs! #1260

Open
FourthSyte opened this issue May 22, 2023 · 1 comment
Exception in callback Transaction.__retry()
handle: <TimerHandle when=3007.116418333 Transaction.__retry()>
Traceback (most recent call last):
File "/usr/local/lib/python3.9/asyncio/selector_events.py", line 1054, in sendto
self._sock.sendto(data, addr)
AttributeError: 'NoneType' object has no attribute 'sendto'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/home/appuser/venv/lib/python3.9/site-packages/aioice/stun.py", line 312, in __retry
self.__protocol.send_stun(self.__request, self.__addr)
File "/home/appuser/venv/lib/python3.9/site-packages/aioice/ice.py", line 266, in send_stun
self.transport.sendto(bytes(message), addr)
File "/usr/local/lib/python3.9/asyncio/selector_events.py", line 1064, in sendto
self._fatal_error(
File "/usr/local/lib/python3.9/asyncio/selector_events.py", line 711, in _fatal_error
self._loop.call_exception_handler({
AttributeError: 'NoneType' object has no attribute 'call_exception_handler'

This error always happens and I don't know how to fix it. For background, my app is a sign language detector that uses the laptop camera to recognize gestures. Unfortunately, the camera does not open and this error keeps popping up. Please help me resolve this.

Here is my code:
import streamlit as st
from PIL import Image
from streamlit_webrtc import webrtc_streamer
import av
import cv2
from cvzone.HandTrackingModule import HandDetector
from cvzone.ClassificationModule import Classifier
import numpy as np
import math

st.set_page_config(page_title="Detection",
                   layout='centered',
                   page_icon='./images/sign-language.png')

st.title("Sign Language Detection")
st.caption("This web demonstrate Sign Language Detection")

detector = HandDetector(maxHands=2)
classifier = Classifier('Model/keras_model.h5',
                        'Model/labels.txt')

offset = 20
imgsize = 300
labels = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P",
          "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", "Hello", "Yes", "No", "ILoveYou", "ThankYou",
          "Eat", "Drink", "Like", "Wrong"]

def video_frame_callback(frame):
    img = frame.to_ndarray(format="bgr24")
    imgoutput = img.copy()
    hands, img = detector.findHands(img)

    if hands:
        hand = hands[0]
        x, y, w, h = hand['bbox']
        # Letterbox the cropped hand onto a square white canvas before classifying.
        imgwhite = np.ones((imgsize, imgsize, 3), np.uint8) * 255
        imgcrop = img[y - offset:y + h + offset, x - offset:x + w + offset]
        aspectratio = h / w

        if aspectratio > 1:
            k = imgsize / h
            wcal = math.ceil(k * w)
            imgresize = cv2.resize(imgcrop, (wcal, imgsize))
            wgap = math.ceil((imgsize - wcal) / 2)
            imgwhite[:, wgap:wcal + wgap] = imgresize
            prediction, index = classifier.getPrediction(imgwhite, draw=False)
        else:
            k = imgsize / w
            hcal = math.ceil(k * h)
            imgresize = cv2.resize(imgcrop, (imgsize, hcal))
            hgap = math.ceil((imgsize - hcal) / 2)
            imgwhite[hgap:hcal + hgap, :] = imgresize
            prediction, index = classifier.getPrediction(imgwhite, draw=False)

        # Draw the predicted label and a bounding box on the output frame.
        cv2.putText(imgoutput, labels[index], (x, y - 20), cv2.FONT_HERSHEY_COMPLEX, 2, (255, 0, 255), 2)
        cv2.rectangle(imgoutput, (x - offset, y - offset),
                      (x + w + offset, y + h + offset), (255, 0, 255), 2)

    return av.VideoFrame.from_ndarray(imgoutput, format="bgr24")

webrtc_streamer(key="example", video_frame_callback=video_frame_callback,
                media_stream_constraints={"video": True, "audio": False})

with st.container():
    st.write("---")

st.markdown("""

Reminder:

For accurate sign language detection, please ensure the following:

  1. Position the gestures correctly within the camera frame.
  2. Use a high-definition (HD/2k) camera for capturing clear and detailed visuals.
  3. Ensure good lighting conditions in your environment to enhance visibility.
  4. Maintain a clutter-free and distraction-free environment.

These steps will help optimize the accuracy and performance of the sign language detection feature. Thank you for your cooperation!

""")

with st.container():
    st.write("---")

# Paths to the 36 example gesture images (images/1.png ... images/36.png).
image_paths = [f'images/{i}.png' for i in range(1, 37)]

st.title("Detectable gestures by the detection system")
st.caption("Presented below are a series of example gestures that can serve as a guide for effectively "
"utilizing the system. "
"This detection system is capable of recognizing alphabets and a selection of words.")

# Initialize slideshow index and total number of images

slideshow_index = st.session_state.get('slideshow_index', 0)
num_images = len(image_paths)

# Display the current image

def display_image(image_index):
    image_path = image_paths[image_index]
    image = Image.open(image_path)
    st.image(image, use_column_width=True)

# Create a layout for slideshow with navigation buttons

col1, col2, col3 = st.columns([1, 10, 1])

# Add navigation buttons

if col1.button('⬅️') and slideshow_index > 0:
    slideshow_index -= 1

with col3:
    if col3.button('➡️') and slideshow_index < num_images - 1:
        slideshow_index += 1

# Save the current index to session state

st.session_state['slideshow_index'] = slideshow_index

# Display the current image

display_image(slideshow_index)

For better reference, here is my repo link:
https://github.com/FourthSyte/ASL-Detection.git

@AIOnGraph

You can use a TURN server to resolve this issue. There is a third-party service, ExpressTurn, which gives you a TURN server URL and credentials.
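streamlit-webrtc accepts an RTCConfiguration-style dict through the rtc_configuration parameter of webrtc_streamer. A minimal sketch, assuming you have obtained a TURN URL, username, and credential from your provider (the relay host and credentials below are placeholders, not real values), reusing the video_frame_callback from your code above:

from streamlit_webrtc import webrtc_streamer

webrtc_streamer(
    key="example",
    video_frame_callback=video_frame_callback,
    media_stream_constraints={"video": True, "audio": False},
    rtc_configuration={
        "iceServers": [
            # Public STUN server for NAT address discovery.
            {"urls": ["stun:stun.l.google.com:19302"]},
            # TURN relay; replace the host and credentials with the
            # values your TURN provider (e.g. ExpressTurn) gives you.
            {
                "urls": ["turn:relay.example.com:3478"],
                "username": "YOUR_TURN_USERNAME",
                "credential": "YOUR_TURN_CREDENTIAL",
            },
        ]
    },
)

A TURN server relays the media stream when a direct peer-to-peer connection cannot be established (for example, behind a strict NAT or firewall), which is a common cause of the ICE connection failure shown in the traceback above.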
