
Showing posts from January 2022

Detectron2 - Object Detection Model Training

Register the dataset:

import json
from detectron2.structures import BoxMode

def get_board_dicts(imgdir):
    json_file = imgdir + "/dataset.json"  # Fetch the json file
    with open(json_file) as f:
        dataset_dicts = json.load(f)
    for i in dataset_dicts:
        filename = i["file_name"]
        i["file_name"] = imgdir + "/" + filename
        for j in i["annotations"]:
            j["bbox_mode"] = BoxMode.XYWH_ABS  # Setting the required Box Mode
            j["category_id"] = int(j["category_id"])
    return dataset_dicts

from detectron2.data import DatasetCatalog, MetadataCatalog

# Registering the Dataset
for d in ["train", "val"]:
    DatasetCatalog.register(
        "boardetect_" + d,
        lambda d=d: get_board_dicts("Text_Detection_Dataset_COCO_Format/" + d),
    )
MetadataCatal...
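The preview is cut off at the MetadataCatalog call. As a minimal sketch of what typically follows the registration above (not necessarily this post's exact code), the metadata for each registered split can be set and read back; the dataset names come from the loop above, while the class list here is only a placeholder assumption:

from detectron2.data import DatasetCatalog, MetadataCatalog

# Placeholder class list -- replace with the real classes of your dataset.
for d in ["train", "val"]:
    MetadataCatalog.get("boardetect_" + d).set(thing_classes=["text"])

# The registered dicts and metadata can later be pulled back out, e.g. for a sanity check.
dataset_dicts = DatasetCatalog.get("boardetect_train")
board_metadata = MetadataCatalog.get("boardetect_train")
print(len(dataset_dicts), board_metadata.thing_classes)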

Detectron2 - Instance Segmentation Model Output

Model outputs: pred_boxes, scores, pred_classes, pred_masks. In this case the most frequently used fields are pred_classes (to check whether instances overlap) and pred_masks (to estimate weight from area).

Visualization:

v = Visualizer(im[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.2)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2_imshow(out.get_image()[:, :, ::-1])

Detectron2 assigns random colors during visualization by default; setting the color mode to SEGMENTATION only makes instances of the same class use similar colors. To give every class one fixed color, you have to edit visualizer.py under .\detectron2\utils:

def overlay_instances(
    self,
    *,
    boxes=None,
    labels=None,
    masks=None,
    keypoints=None,
    assigned_colors=None,
    alpha=0.5,
):
    num_instances = None
    if boxes is not None:
        boxes = self._convert_boxes(boxes)
        num_instances = len(boxes)
    if masks is not None:
        masks = self._convert_masks(masks)
        if num_instance...
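The post mentions estimating weight from mask area. A minimal sketch of computing per-instance pixel area from pred_masks (the grams-per-pixel factor is a made-up placeholder for illustration, not a value from the post):

import numpy as np

instances = outputs["instances"].to("cpu")
masks = instances.pred_masks.numpy()       # (N, H, W) boolean masks
classes = instances.pred_classes.numpy()   # (N,) predicted class ids

areas = masks.sum(axis=(1, 2))             # pixel area of each instance
GRAMS_PER_PIXEL = 0.05                     # hypothetical calibration factor
for cls, area in zip(classes, areas):
    print(f"class {cls}: {area} px -> ~{area * GRAMS_PER_PIXEL:.1f} g")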

Detectron2 - Keypoint Model Output

[Note that the Detectron2 output differs from the COCO annotation format.]

Model outputs: pred_boxes, scores, pred_classes, pred_keypoints. In this case the most frequently used fields are pred_boxes (to check whether someone is too close to the ROI) and pred_keypoints (to detect dangerous movements or drowning).

Extracting the boxes:

# Extract the bounding boxes
Box = outputs["instances"].pred_boxes
box = Box.tensor.cpu()
box = box.numpy()

In COCO format a bounding box is written as [top-left x, top-left y, width, height], whereas Detectron2's pred_boxes are [[x0_min, y0_min, x0_max, y0_max], [.......], [xn_min, yn_min, xn_max, yn_max]].

Extracting the keypoints:

# Extract the keypoints
key = outputs["instances"].pred_keypoints.cpu()
Keypoints = key.data.numpy()

If there is only one person in the frame, the keypoint output looks like:

[[[ 1.4401215e+02 6.2059574e+01 3.1292382e-01]
  [ 1.5147527e+02 5.4588280e+01 1.9459458e-01]
  [ 1.3752248e+02 5.4588280e+01 2.7202889e-01]
  [ 1.6997083e+02 6.0760220e+01 5.2661333e-02]
  [ 1.2746349e+02 6.0435383e+01 7.3776595e-02]
  [ 1.6380565e+02 9.6817345e+01 4.2605755e-01]
  [ 1.2778798e+02 9.8766380e+01 3.7693325e-01]
  [ 1.7224222e+02 1.2897641e+02 4.3600988e-01]
  [ 1.1351069e+02...
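A minimal sketch of converting Detectron2's [x_min, y_min, x_max, y_max] boxes into the COCO [x, y, width, height] convention described above (variable names are illustrative):

import numpy as np

# (N, 4) array in [x_min, y_min, x_max, y_max] order, as extracted above
boxes = outputs["instances"].pred_boxes.tensor.cpu().numpy()

coco_boxes = boxes.copy()
coco_boxes[:, 2] = boxes[:, 2] - boxes[:, 0]   # width  = x_max - x_min
coco_boxes[:, 3] = boxes[:, 3] - boxes[:, 1]   # height = y_max - y_min
# each row of coco_boxes is now [x_min, y_min, width, height]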

Detectron2 - Data Augmentation Methods

Find detectron2\engine\defaults.py and customize the mapper there. transform_list holds the augmentation methods, including resizing, flipping, changing brightness, and so on. More augmentation methods are listed at: https://detectron2.readthedocs.io/en/latest/modules/data_transforms.html

import copy
from detectron2.data import detection_utils as utils

def mapper(dataset_dict):
    # Implement a mapper, similar to the default DatasetMapper, but with your own customizations
    dataset_dict = copy.deepcopy(dataset_dict)  # it will be modified by code below
    image = utils.read_image(dataset_dict["file_name"], format="BGR")
    transform_list = [
        # T.Resize(300, 300),
        # T.RandomCrop("absolute", (640, 640)),
        T.RandomSaturation(0.5, 1.5),
        T.RandomRotation(angle=[-90.0, 90.0]),
        T.RandomLighting(scale=0.1),
        T.RandomSaturation(0.75, 1.25),
        T.RandomFlip(prob=0.5, horizontal=Fals...
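The preview cuts off inside transform_list. As a rough sketch of how such a custom mapper is usually completed and wired into training (based on the common detectron2 custom-mapper pattern, not necessarily this post's exact code; cfg and the chosen transforms are assumptions):

import copy
import torch
import detectron2.data.transforms as T
from detectron2.data import build_detection_train_loader, detection_utils as utils

def custom_mapper(dataset_dict):
    # Sketch of a full mapper; the transform list below is illustrative only.
    dataset_dict = copy.deepcopy(dataset_dict)
    image = utils.read_image(dataset_dict["file_name"], format="BGR")
    transform_list = [T.RandomFlip(prob=0.5, horizontal=True, vertical=False)]
    image, transforms = T.apply_transform_gens(transform_list, image)
    dataset_dict["image"] = torch.as_tensor(image.transpose(2, 0, 1).astype("float32"))

    # Apply the same transforms to the annotations and rebuild the Instances.
    annos = [
        utils.transform_instance_annotations(obj, transforms, image.shape[:2])
        for obj in dataset_dict.pop("annotations")
        if obj.get("iscrowd", 0) == 0
    ]
    instances = utils.annotations_to_instances(annos, image.shape[:2])
    dataset_dict["instances"] = utils.filter_empty_instances(instances)
    return dataset_dict

# Hand the mapper to the training data loader (cfg is assumed to be configured already).
train_loader = build_detection_train_loader(cfg, mapper=custom_mapper)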

Detectron2 - Filtering Displayed Objects by Condition

After an image is predicted you get an Instances object; conditional expressions can be used to keep only specific detections, and visualizing from that filtered Instances object drops the unwanted objects. The example code below shows a few filtering conditions for reference.

outputs = predictor(frame)

# Keep only a specific class
outputs["instances"] = outputs["instances"][outputs["instances"].pred_classes == 3]
# Keep detections with score greater than 0.9
outputs["instances"] = outputs["instances"][outputs["instances"].scores > 0.9]
# Keep boxes whose x1 coordinate is greater than 100
outputs["instances"] = outputs["instances"][np.where(outputs["instances"].pred_boxes.tensor.cpu().numpy()[:, 0] > 100)]

v = Visualizer(frame[:, :, ::-1], metadata=mydata_metadata, scale=1, instance_mode=ColorMode.SEGMENTATION)
v = v.draw_instance_predictions(outputs["instances"].to("cpu"))
Image.fromarray(v.get_image()[:, :, ::-1])
cv2.imwrite('tmpfile/prediction_d_' + j + '.jpg', v.get_image()[:, :, ::-1])
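The same three filters can also be combined into a single boolean mask before indexing, which avoids slicing the Instances object three times; a minimal sketch (the class id 3 and the thresholds are just the example values from above):

instances = outputs["instances"]
keep = (
    (instances.pred_classes == 3)
    & (instances.scores > 0.9)
    & (instances.pred_boxes.tensor[:, 0] > 100)   # x1 coordinate of each box
)
outputs["instances"] = instances[keep]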