Hello guys, today I came across a Python library called OpenCV. It's a computer vision library that can recognize images on the screen/webcam and more.
I think this is a good way to get around the BattlEye service, because with Python you can easily port the script to Linux/macOS, and when you run your code it shows up under a different process name. Here is my code: it detects an item from a PNG image (the item cropped without borders, 15x15 px) and finds its screen coordinates within a few milliseconds.
This is the teste.png the code needs to work — note the first comment lines in the code.
teste.png
Code:
import time
import pyautogui
import cv2
import numpy as np
from PIL import ImageGrab

# Simple bot to detect a Tibia item image on screen without injecting into the client.
# You only need to crop the item image to 15x15 (for it to work).
# Note: I developed this for a 1360x768 resolution; for another resolution,
# change the bbox values in ImageGrab.grab.
# For full use you should adapt this with win32gui (to get the window rect/position).
# Thanks a lot.
# _JMG/Sleepy22

def imagesearch(image, precision=0.8):
    # grab the whole screen and look for the template anywhere on it
    im = ImageGrab.grab()
    img_rgb = np.array(im)
    # ImageGrab returns RGB, so convert with RGB2GRAY (not BGR2GRAY)
    img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)
    res = cv2.matchTemplate(img_gray, image, cv2.TM_CCOEFF_NORMED)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
    if max_val < precision:
        return (-1, -1)
    return max_loc

def imageinrange(image, precision, region):
    # search only inside the game area ("ground") or the inventory panel
    if region == "ground":
        bbox = (288, 85, 897, 529)
    else:
        bbox = (1188, 40, 1360, 726)
    im = ImageGrab.grab(bbox=bbox)
    img_rgb = np.array(im)
    img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)
    res = cv2.matchTemplate(img_gray, image, cv2.TM_CCOEFF_NORMED)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
    if max_val < precision:
        return (-1, -1)
    # add the bbox origin back so the result is in full-screen coordinates
    return (max_loc[0] + bbox[0], max_loc[1] + bbox[1])

def getground(image):
    template = cv2.imread(image, 0)
    # items on the ground render larger than in the inventory, so resize to 19x19
    template = cv2.resize(template, dsize=(19, 19), interpolation=cv2.INTER_CUBIC)
    return imageinrange(template, 0.8, "ground")

def getinventory(image):
    template = cv2.imread(image, 0)
    template = cv2.resize(template, dsize=(15, 15), interpolation=cv2.INTER_CUBIC)
    return imageinrange(template, 0.8, "inventory")

def getall(image):
    # try the ground first, then fall back to the inventory
    tp = getground(image)
    if tp == (-1, -1):
        tp = getinventory(image)
    return tp

time.sleep(3)
p = getall("teste.png")
if p != (-1, -1):  # only move the mouse if the item was actually found
    pyautogui.moveTo(p[0], p[1])
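Side note for anyone curious what cv2.matchTemplate with TM_CCOEFF_NORMED is doing under the hood: it slides the template over the screenshot and scores every position with mean-subtracted normalized cross-correlation, and minMaxLoc then gives you the best score (max_val, which the precision threshold is compared against) and its position (max_loc). Here is a small numpy-only sketch of the same idea on synthetic arrays, so you can play with it without a screen capture (the function name is my own, not OpenCV's):

```python
import numpy as np

def match_ccoeff_normed(img, tpl):
    """Brute-force version of cv2.matchTemplate(..., cv2.TM_CCOEFF_NORMED):
    score each template position with mean-subtracted normalized
    cross-correlation. 1.0 means a perfect match."""
    th, tw = tpl.shape
    t = tpl - tpl.mean()
    out_h = img.shape[0] - th + 1
    out_w = img.shape[1] - tw + 1
    res = np.zeros((out_h, out_w))
    for y in range(out_h):
        for x in range(out_w):
            patch = img[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            res[y, x] = (p * t).sum() / denom if denom > 0 else 0.0
    return res

# synthetic test: cut a 5x5 "item" out of a random 20x20 "screenshot"
rng = np.random.default_rng(0)
screen = rng.random((20, 20))
template = screen[7:12, 12:17].copy()  # true position: row 7, col 12

res = match_ccoeff_normed(screen, template)
y, x = np.unravel_index(res.argmax(), res.shape)
print((x, y), round(res.max(), 6))  # peak at the true position, score 1.0
```

One detail worth knowing: numpy's argmax gives (row, col), i.e. (y, x), while OpenCV's max_loc is (x, y) — that's why the bot code adds the bbox offsets in (x, y) order.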