Saturday, February 17, 2024

1642. Furthest Building You Can Reach

Approach

We have two resources: ladders (a ladder lets us ignore the height gap between two adjacent buildings) and bricks. Intuitively, we should spend the ladders on the largest climbs and cover all remaining climbs with bricks.

This gives the following condition:

total of all positive climbs - sum of the largest `ladders` climbs = amount to cover with bricks <= bricks available


To maintain the largest `ladders` climbs, use a min-heap; the idea is the same as in 215. Kth Largest Element in an Array.
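The greedy min-heap bookkeeping can be sketched in Python with `heapq` (a minimal standalone sketch of the same idea, not the submitted solution):

```python
import heapq

def furthest_building(heights, bricks, ladders):
    # Greedy: keep the largest `ladders` climbs on ladders via a min-heap;
    # any climb that overflows the heap must be paid for with bricks.
    heap = []  # min-heap of the climbs currently covered by ladders
    for i in range(1, len(heights)):
        diff = heights[i] - heights[i - 1]
        if diff <= 0:
            continue  # stepping down or level costs nothing
        heapq.heappush(heap, diff)
        if len(heap) > ladders:
            bricks -= heapq.heappop(heap)  # smallest climb falls back to bricks
        if bricks < 0:
            return i - 1  # cannot reach building i
    return len(heights) - 1

print(furthest_building([4, 2, 7, 6, 9, 14, 12], bricks=5, ladders=1))  # → 4
```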

Code
class Solution {
public:
    int furthestBuilding(vector<int>& heights, int bricks, int ladders) {
        long long int sum = 0, topKSum = 0, i;
        priority_queue<int, vector<int>, greater<int>> pq;  // min-heap of the climbs assigned to ladders

        for(i = 1; i < heights.size(); i++)
        {
            int diff = heights[i] - heights[i-1];
            if(diff > 0)
            {
                sum += diff;       // total of all climbs so far
                topKSum += diff;   // total of the climbs currently on ladders
                pq.push(diff);
                if((int)pq.size() > ladders)
                {
                    topKSum -= pq.top();  // smallest ladder climb falls back to bricks
                    pq.pop();
                }
                if(sum - topKSum > bricks)  // bricks needed exceed what we have
                    break;
            }
        }
        return i - 1;
    }
};

Wednesday, February 7, 2024

Frequently used C++ templates for LeetCode practice

 1. Comparators

bool cmp(const T& a, const T& b)   // T is the element type
{
    return a > b;   // sorts in descending order
}

sort(v.begin(), v.end(), cmp);


Note: the condition returned by cmp is exactly the order you want in the final result: return true when a should come before b.

Thursday, January 18, 2024

Monday, January 8, 2024

Bad permissions. Try removing permissions for user......

Problem:

When connecting over ssh, this error appears: Bad permissions. Try removing permissions for user: LAPTOP-xxxxxx\\user2 on file C:/Users/user1/.ssh/config.


(user1 is the account I use; user2 is another account that exists on the machine but is not the one in use.)


Solution:

Go to C:\Users\user1\.ssh

Right-click the .ssh folder > Disable inheritance > Convert inherited permissions into explicit permissions on this object

Then remove user2 and click Apply.


Done!


Ref:

https://stackoverflow.com/questions/49926386/openssh-windows-bad-owner-or-permissions

https://leesonhsu.blogspot.com/2021/03/ssh-key-windows-permission-denied.html

Wednesday, November 8, 2023

FastAIoT config fields

"name": the service name

"central_broker" & "app_broker": IP and port settings

    "host"

    "management_port"

    "connection_port"

"input": DataName_DataFormat (ex: png_image, acc_nparray)

"output": DataName_DataFormat (ex: fall_text, speech_text)

"topology": "source" & "destination" pair

    "type": "input", "output", "server"

    "queue"

            if type == "input" or "output": DataName_DataFormat 

            else: ModelName_<input/output>_DataFormat

            (ex: "HumanDetector_output_image", "FallDetectorLSTM_input_nparray")

(ex:
    "source": {
        "type": "input",
        "queue": "png_image"
    },
    "destination": {
        "type": "server",
        "queue": "FallDetectorGCN_input_image"
    })
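Putting the fields together, a complete config might look like the sketch below, assembled from the field descriptions above. The service name, hosts, and ports are placeholder values, not settings from a real deployment:

```python
import json

# Hypothetical FastAIoT config assembled from the fields described above.
# Host/port values and the service name are placeholders.
config = {
    "name": "FallDetection",
    "central_broker": {"host": "127.0.0.1", "management_port": 15672, "connection_port": 5672},
    "app_broker":     {"host": "127.0.0.1", "management_port": 15673, "connection_port": 5673},
    "input":  "png_image",   # DataName_DataFormat
    "output": "fall_text",   # DataName_DataFormat
    "topology": [
        {
            "source":      {"type": "input",  "queue": "png_image"},
            "destination": {"type": "server", "queue": "FallDetectorGCN_input_image"},
        }
    ],
}
print(json.dumps(config, indent=2))
```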

FastAIoT platform: building your own service

 Version without the central broker:

import pika, sys, os
import numpy as np
import cv2
import torch
import torchvision

os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

# COCO dataset category names; index = label id returned by the detector
COCO_INSTANCE_CATEGORY_NAMES = [
    '__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
    'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'N/A', 'stop sign',
    'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
    'elephant', 'bear', 'zebra', 'giraffe', 'N/A', 'backpack', 'umbrella', 'N/A', 'N/A',
    'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
    'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket',
    'bottle', 'N/A', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
    'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
    'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'N/A', 'dining table',
    'N/A', 'N/A', 'toilet', 'N/A', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
    'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'N/A', 'book',
    'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'
]

# load the model once at module level so the callback can reuse it
print("Start loading model")
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()
print("Load model successfully")

class PersonDetector():

    def __init__(self) -> None:
        self.connection = pika.BlockingConnection(
            pika.ConnectionParameters(host='192.168.56.1', port=5672))
        self.channel = self.connection.channel()

        self.channel.exchange_declare(exchange="PersonDetector", exchange_type="topic",
                                      auto_delete=True,
                                      arguments={"output": ["PersonDetector_output_text"]})
        self.channel.queue_declare(queue='PersonDetector_input_image', exclusive=True)
        self.channel.queue_bind(queue="PersonDetector_input_image", exchange="PersonDetector",
                                routing_key="*.*.*.image")

    def __callback(self, ch, method, properties, body):
        if "PersonDetector" in method.routing_key:
            return  # skip messages published by this service itself

        routing_key_tokens = method.routing_key.split(".")
        app_name = routing_key_tokens[0]
        client_id = routing_key_tokens[1]

        # preprocessing: bytes -> BGR image -> normalized RGB tensor (C, H, W)
        img_bytes = np.frombuffer(body, dtype=np.uint8)
        img = cv2.imdecode(img_bytes, 1)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # cvtColor returns a new image
        img = img.astype(np.float32) / 255.0
        img = torch.tensor(img).permute(2, 0, 1)

        # detect
        with torch.no_grad():
            pred = model([img])
        pred_class = [COCO_INSTANCE_CATEGORY_NAMES[i] for i in list(pred[0]['labels'].numpy())]
        pred_boxes = [[(i[0], i[1]), (i[2], i[3])] for i in list(pred[0]['boxes'].numpy())]
        pred_score = list(pred[0]['scores'].numpy())
        above = [idx for idx, x in enumerate(pred_score) if x > 0.7]
        if not above:
            return  # no detection above the confidence threshold
        pred_t = above[-1]
        pred_boxes = pred_boxes[:pred_t + 1]
        pred_class = pred_class[:pred_t + 1]
        for cls in pred_class:
            if cls == "person":
                print("person detected!")

    def run(self):
        self.channel.basic_consume(queue='PersonDetector_input_image',
                                   on_message_callback=self.__callback, auto_ack=True)
        print(' [*] Waiting for messages. To exit press CTRL+C')
        self.channel.start_consuming()


if __name__ == '__main__':
    try:
        detector = PersonDetector()
        detector.run()
    except KeyboardInterrupt:
        print('Interrupted')
        try:
            sys.exit(0)
        except SystemExit:
            os._exit(0)


==========

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='192.168.56.1', port=5672))
channel = connection.channel()

# read the test image and publish it to the detector's exchange
with open("C:\\Users\\Cherry\\Downloads\\image.jpg", "rb") as f:
    data = f.read()
channel.basic_publish(exchange='PersonDetector',
                      routing_key='PersonDetection.client0.null.image',
                      body=data)
connection.close()
print("finish")

Tuesday, September 26, 2023

Starting the FastAIoT platform

Conda

Activate the conda environment: conda activate aiot


Central broker

Open the Docker console, find rabbitmq, and click start

Back in the project folder, start the central broker


Config 

The UI for adding or removing an "APP"; upload the config here

Start:

python main.py

Upload:

POST -> Try it out -> choose the file -> Execute

Check Responses -> Code; on success it shows: Build App successfully



Server

In the application folder, start the server

python server.py


Client

In the application folder, send data with client.py

python client.py --audio <filename>.wav



Build a container from a Dockerfile

docker build -t <name> .

docker run --rm <name>

Friday, September 15, 2023

Multi Camera Multi Target Python* Demo (openvino) (fail)

git clone https://github.com/openvinotoolkit/open_model_zoo.git
cd open_model_zoo\demos
cd multi_camera_multi_target_tracking_demo\python
conda create --name openvino python=3.7
conda activate openvino
pip install openvino
cd ..\..
pip install -r requirements.txt
cd multi_camera_multi_target_tracking_demo\python
pip install openvino-dev
pip install --upgrade pip
pip install .
omz_downloader --list models.lst
omz_converter --list models.lst
python multi_camera_multi_target_tracking_demo.py -i "\cam1.mp4" "\cam4.mp4" --m_detector "\open_model_zoo\demos\multi_camera_multi_target_tracking_demo\python\intel\person-detection-retail-0013\FP16\person-detection-retail-0013.xml" --m_reid "\open_model_zoo\demos\multi_camera_multi_target_tracking_demo\python\intel\person-reidentification-retail-0277\FP16\person-reidentification-retail-0277.xml" --config configs\person.py --output_video outout.avi


It runs, but the results are completely inaccurate = W =


ref:

https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/multi_camera_multi_target_tracking_demo/python

Installing Torchreid (fail)

git clone https://github.com/KaiyangZhou/deep-person-reid.git
cd deep-person-reid/
conda create --name torchreid python=3.7
conda activate torchreid
pip install -r requirements.txt
pip install torch==1.12.0+cu113 torchvision==0.13.0+cu113 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu113
python setup.py develop
python scripts/main.py --config-file configs/im_osnet_x1_0_softmax_256x128_amsgrad_cosine.yaml --transforms random_flip random_erase --root ./dataset data.save_dir log/osnet_x1_0_dukemtmcreid_softmax_cosinelr # training doesn't work




dataset: 
https://drive.google.com/file/d/0B8-rUzbwVRk0c054eEozWG9COHM/view

Installing CMU Object Detection & Tracking for Surveillance Video Activity Detection (fail)

# method 1
conda create --name cmuOD python=3.7
conda activate cmuOD
pip install -r requirements.txt
conda install tensorflow-gpu==1.15
python obj_detect_tracking.py --model_path obj_v3_model --version 3 --video_dir v1-val_testvideos --video_lst_file v1-val_testvideos.lst --frame_gap 1 --get_tracking --tracking_dir test_track_out # can't successfully execute yet
(ref: "最簡單的python Tensorflow-gpu安裝方法")

# method 2
conda create --name cmuOD python=3.7
conda activate cmuOD
conda install -c conda-forge cudatoolkit=10.0 cudnn=7.6.5
pip install -r requirements.txt
pip install tensorflow-gpu==1.15




# test tensorflow with GPU
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

# requirements.txt
numpy==1.19
scipy
scikit-learn
opencv-python
matplotlib
pycocotools
tqdm
protobuf==3.20.*
psutil
pyyaml
pyyaml