Posts

Currently showing posts from 2021

Instance Segmentation Model Training with Detectron2

```python
import os

from detectron2.config import get_cfg
from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import register_coco_instances

detectron2_repo_path = ""  # path to the detectron2 repo

# training data
img_path = r"coco/images/train"
json_path = r"coco/annotations/instances_train.json"
register_coco_instances("mydata", {}, json_path, img_path)
mydata_metadata = MetadataCatalog.get("mydata")
dataset_dicts = DatasetCatalog.get("mydata")

# test data
val_img_path = r"coco/images/val2017"
val_json_path = r"coco/annotations/instances_val2017.json"
register_coco_instances("testdata", {}, val_json_path, val_img_path)
testdata_metadata = MetadataCatalog.get("testdata")
val_dataset_dicts = DatasetCatalog.get("testdata")

# with our own datasets registered, build the config
cfg = get_cfg()
cfg.merge_from_file(os.path.join(
    detectron2_repo_path,
    "configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml",
))  # GitHub also provides configs for other backbones: FPN, C4, DC5
cfg.DATASETS.TRAIN = (
```
…
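The registration above relies on the annotation files being in COCO instance format. As a reference, here is a minimal, hypothetical sketch (all field values invented) of what a file like `instances_train.json` contains, and how a loader groups annotations per image into dataset dicts:

```python
import json

# Hypothetical COCO-format annotation data, mirroring instances_train.json.
coco = {
    "images": [{"id": 1, "file_name": "000001.jpg", "width": 640, "height": 480}],
    "annotations": [{
        "id": 1, "image_id": 1, "category_id": 1,
        "bbox": [100, 120, 50, 80],  # x, y, width, height
        "segmentation": [[100, 120, 150, 120, 150, 200, 100, 200]],
        "area": 4000, "iscrowd": 0,
    }],
    "categories": [{"id": 1, "name": "defect"}],
}

# Round-trip through JSON, then index annotations by image id, the way a
# COCO loader groups them into per-image dataset dicts.
data = json.loads(json.dumps(coco))
per_image = {}
for ann in data["annotations"]:
    per_image.setdefault(ann["image_id"], []).append(ann)

print(len(per_image[1]))  # -> 1 annotation attached to image 1
```

Each entry in `annotations` links an `image_id` to a polygon (`segmentation`) and a box (`bbox`); `register_coco_instances` reads exactly this structure from `json_path`.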

Installing Detectron2 on Windows

1. Download the 64-bit Anaconda installer and run it: https://www.anaconda.com/products/individual
2. Open the Anaconda Prompt and create an environment: `conda create --name detectron2 python=3.7` (answer `Y` when prompted).
3. Activate the environment: `conda activate detectron2`
4. Download pycocotools 2.0.1 from PyPI and unzip it (it is a `.tar.gz`, so unzip twice): https://files.pythonhosted.org/packages/5c/82/bcaf4d21d7027fe5165b88e3aef1910a36ed02c3e99d3385d1322ea0ba29/pycocotools-2.0.1.tar.gz
5. Enter the `pycocotools-2.0.1` folder, open `setup.py`, replace `extra_compile_args=['-Wno-cpp', '-Wno-unused-function', '-std=c99'],` with `extra_compile_args={'gcc': ['/Qstd=c99']},` and save the file.
6. Build and install from the Anaconda Prompt (Visual Studio Build Tools must be installed first):

```
cd C:\Users\chase\Downloads\dist\pycocotools-2.0.1
python setup.py build_ext install
```

   If it works, you should see `Finished processing dependencies for pycocotools==2.0.1`.
7. Clean up the source folder:

```
cd ..
RMDIR /S pycocotools-2.0.1
```

8. Install PyTorch and torchvision: `conda install pytorch torchvision cudatoolkit=10.1`
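The `setup.py` edit in the steps above can also be applied programmatically. A small hypothetical helper (the two string literals are exactly the ones quoted in the steps; the function name is my own):

```python
def patch_pycocotools_setup(text: str) -> str:
    """Swap the gcc-only compile flags in pycocotools' setup.py for the
    MSVC-compatible form described in the install steps."""
    old = "extra_compile_args=['-Wno-cpp', '-Wno-unused-function', '-std=c99'],"
    new = "extra_compile_args={'gcc': ['/Qstd=c99']},"
    return text.replace(old, new)

# Example usage: patch the file in place (run inside pycocotools-2.0.1/).
# from pathlib import Path
# p = Path("setup.py")
# p.write_text(patch_pycocotools_setup(p.read_text()))
```

The replacement matters because the `-Wno-*`/`-std=c99` flags are gcc-specific and make the MSVC build fail; the dict form routes the flag only to a `gcc`-style compiler.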

MLflow Quick Tutorial: Setting Up a Tracking Server

[Image: MLflow architecture overview]
The overall architecture is shown in the figure above. MLflow is a tool for tracking and managing your training parameters, losses, and models, and it also provides a deployment API for hooking into downstream serving. This post walks through setting up a tracking server; if you only want to try MLflow locally, the official quickstart is enough. Roughly: clients log parameters and losses and upload trained models. Parameters can be stored locally or in a database, while models need file storage such as S3, Azure Blob, or an FTP server.

Server setup

Install the Python packages in a Conda environment:

```
pip install mlflow
pip install azure-storage-blob
pip install mysqlclient
```

Install MySQL and create a user:

```
create user 'mlflow'@'%' identified by 'chase0024';
```

Set the Azure Blob environment variables.

Windows (cmd):

```
setx AZURE_STORAGE_CONNECTION_STRING "<yourconnectionstring>"
setx AZURE_STORAGE_ACCESS_KEY "<youraccesskey>"
```

Linux (bash):

```
export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
export AZURE_STORAGE_ACCESS_KEY="<youraccesskey>"
```

Start the server:

```
mlflow server --backend-store-uri mysql://dbname:password@127.0.0.1:3306/mlflow --default-artifact-root wasbs://containername@blobaccount.blob.core.windows.net/ --host=0.0.0.0
```

Client setup

Install in a Conda env…
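The `mlflow server` command packs several pieces together. A small hypothetical helper (all argument values are placeholders) that assembles the command from its parts makes the role of each flag explicit:

```python
def mlflow_server_cmd(db_user: str, db_password: str, db_host: str,
                      db_name: str, container: str, account: str) -> str:
    """Assemble the `mlflow server` command line.

    --backend-store-uri:      MySQL database holding params/metrics/run metadata
    --default-artifact-root:  Azure Blob container holding models and artifacts
    """
    backend = f"mysql://{db_user}:{db_password}@{db_host}:3306/{db_name}"
    artifacts = f"wasbs://{container}@{account}.blob.core.windows.net/"
    return (
        f"mlflow server --backend-store-uri {backend} "
        f"--default-artifact-root {artifacts} --host=0.0.0.0"
    )

print(mlflow_server_cmd("mlflow", "chase0024", "127.0.0.1",
                        "mlflow", "containername", "blobaccount"))
```

This mirrors the split in the architecture above: structured run metadata goes to the database backend store, while large binary artifacts go to blob storage under the artifact root.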