
Install Git

The following is adapted from the Git tutorial on Liao Xuefeng's official website.

On Windows, download Git from the official Git website and install it.

After installation, one final configuration step remains. Enter the following in the Git Bash command line:

$ git config --global user.name "Your Name"
$ git config --global user.email "email@example.com"
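
To confirm the settings took effect, you can list your global configuration:

$ git config --global --list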

Connecting to GitHub with SSH

The following is adapted from the official GitHub documentation.

Check for existing SSH keys

Enter the following command in Git Bash to check whether any SSH keys already exist:

$ ls -al ~/.ssh

Check the directory listing to see whether you already have an SSH public key. By default, the public key file name is one of the following:

- id_rsa.pub
- id_ecdsa.pub
- id_ed25519.pub

Generate a new SSH key

To generate a new SSH key, paste the text below into Git Bash (substituting your GitHub email address):

$ ssh-keygen -t ed25519 -C "your_email@example.com"

When you are prompted to "Enter a file in which to save the key", press Enter to accept the default file location.

> Enter a file in which to save the key (/c/Users/you/.ssh/id_ed25519):[Press enter]

At the prompt, type a secure passphrase. For more information, see "Working with SSH key passphrases."

> Enter passphrase (empty for no passphrase): [Type a passphrase]
> Enter same passphrase again: [Type passphrase again]

Add the SSH key to the ssh-agent

Start the ssh-agent manually:

$ ssh-agent bash

In a terminal window without elevated permissions, add your SSH private key to the ssh-agent. Use whichever of the two path forms your shell expects:

$ ssh-add C:/Users/YOU/.ssh/id_ed25519

$ ssh-add C:\\Users\\YOU\\.ssh\\id_ed25519
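
To verify that the key was added, list the identities currently held by the agent:

$ ssh-add -l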

Add the SSH key to your GitHub account

If your SSH public key file has a different name than in the example, adjust the file name to match your setup. When copying the key, do not add any newlines or whitespace:

$ clip < ~/.ssh/id_ed25519.pub

Tip: if clip is unavailable, locate the hidden .ssh folder (C:\Users\xw\.ssh), open the public key file in your usual text editor, and copy its contents to the clipboard.
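
Alternatively, you can print the key in the terminal and copy it from there:

$ cat ~/.ssh/id_ed25519.pub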

  1. In the upper-right corner of any page on GitHub, click your profile photo, then click "Settings".
  2. In the "Access" section of the sidebar, click "SSH and GPG keys".
  3. Click "New SSH key".
  4. In the "Title" field, add a descriptive label for the new key. For example, if you are using a personal laptop, you might call this key "Personal laptop".
  5. In the "Key" field, paste the public key.
  6. Click "Add SSH key".

Test the SSH connection

$ ssh -T git@github.com

Verify that the resulting message contains your username; if it does, the connection succeeded.

> Hi USERNAME! You've successfully authenticated, but GitHub does not provide shell access.

Install Node.js

Download Node.js from the official website and install it. The default settings are fine; there is no need to check "Automatically install the necessary tools".

Check that "C:\Program Files\nodejs\" has been added to the system variables among the Windows environment variables.
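
You can also confirm the installation from Git Bash by checking the versions:

$ node -v
$ npm -v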

Install Hexo globally

Create a new private repository on GitHub, for example hexo, and git clone it to your local machine. Open the local hexo folder, right-click an empty area and choose Git Bash Here, then run the following commands to install hexo and the hexo server module (-g means global installation):

$ npm install -g hexo-cli
$ npm install hexo-server --save

When this finishes, the hexo folder will contain a node_modules subfolder and the files package.json and package-lock.json.

Initialize the blog:

$ hexo init blog

When this finishes, a new blog subfolder appears inside the hexo folder; enter it in Git Bash:

$ cd blog

Inside the blog folder, test whether hexo was installed successfully (g generates the site and s starts a local hexo server; the two commands are shorthand for hexo generate and hexo server):

$ hexo g
$ hexo s

On success you will see a message; open http://localhost:4000/ to view the freshly generated site. Press Ctrl + C to stop the server.

Deploy hexo to GitHub

Create a repository on GitHub

Create a new public repository on GitHub. The Repository name must be "your GitHub username.github.io"; the Description field and the "Initialize this repository with: Add a README file" option are both optional.

Edit the hexo configuration file

In the blog folder, find the _config.yml file, the configuration file of your hexo blog. Open it with Sublime, VSCode, or Notepad, locate the two settings below, and change them as shown, replacing username with your GitHub username.

# Deployment
## Docs: https://hexo.io/docs/one-command-deployment
deploy:
  type: git
  repo: https://github.com/username/username.github.io.git
  branch: master

And the other:

# URL
## Set your site url here. For example, if you use GitHub Page, set url as 'https://username.github.io/project'
url: https://username.github.io/

Install the Git deployment tool

$ npm install hexo-deployer-git --save

Then run the following three commands, and your blog will be reachable at username.github.io!

$ hexo clean
$ hexo g
$ hexo d

Common commands

Common Git commands

$ git add -A
$ git commit -m "git tracks changes"
$ git push origin master

Common Hexo commands

$ hexo new "pagename" 
$ hexo new page "pagename"
$ hexo clean
$ hexo g
$ hexo d
$ hexo s

Here, hexo new "pagename" creates a new post under source/_posts/, and hexo d pushes the generated site to the server.
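
For example, a typical publish cycle might look like this (the post name is just an illustration):

$ hexo new "hello-world"
$ hexo g
$ hexo d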

Customizing Hexo

Open your blog's configuration file _config.yml in the blog folder with Sublime, VSCode, or Notepad; there you can set the blog title, author name, language, and more.
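
For reference, the relevant fields sit near the top of _config.yml in the Site section (the values below are only examples):

# Site (values are illustrative)
title: My Blog
author: Your Name
language: en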

Install NexT

Note: the original NexT repository linked from the old NexT website is no longer maintained; use the community-maintained version instead.

Clone the community-maintained NexT repository into themes/next under the blog directory:

$ git clone https://github.com/theme-next/hexo-theme-next themes/next

Set the theme in the Hexo root configuration file, hexo/blog/_config.yml:

theme: next

Customizing NexT

In _config.yml under the next folder, set the home and archives paths as well as the scheme (this blog uses the Gemini scheme), as sketched below.
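
A minimal sketch of the corresponding settings in themes/next/_config.yml; the exact keys and the icon syntax after || vary between NexT versions:

# themes/next/_config.yml (key names may differ between NexT versions)
scheme: Gemini

menu:
  home: / || home
  archives: /archives/ || archive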

References

[1] Liao Xuefeng's official Git tutorial

[2] Connecting to GitHub with SSH (GitHub Docs)

[3] Hexo official website

[4] NexT community-maintained repository

Full text:
http://openaccess.thecvf.com/content_ICCV_2019/html/Chen_Self-Critical_Attention_Learning_for_Person_Re-Identification_ICCV_2019_paper.html

Introduction

  • Most attention modules are usually trained in a weakly-supervised manner with the final objective, for example, the supervision from the triplet loss or classification loss in the person ReID task.

    1. As the supervision is not specifically designed for the attention module, the gradients from this weak supervisory signal may vanish during backpropagation.

    2. The attention maps learned in such a manner are not always "transparent" in their meaning, and they lack discrimination ability and robustness.

    3. Redundant and misleading attention maps can hardly be corrected without a direct and appropriate supervisory signal.

    4. The quality of the attention during the training process can only be evaluated qualitatively by human end-users examining the attention maps one by one, which is labor-intensive and inefficient.

  1. We learn the attention with a critic which measures the attention quality and provides a powerful supervisory signal to guide the learning process.

  2. Since most effective evaluation indicators are usually non-differentiable, e.g., the gain of the attention model over the basic network, we jointly train our attention agent and critic in a reinforcement-learning manner, where the agent produces the visual attention while the critic analyzes the gain from the attention and guides the agent to maximize this gain.

  3. We design spatial- and channel-wise attention models with our critic module.


Full text:
http://openaccess.thecvf.com/content_ICCV_2019/html/Liu_Deep_Reinforcement_Active_Learning_for_Human-in-the-Loop_Person_Re-Identification_ICCV_2019_paper.html

Introduction

Most existing supervised person Re-ID approaches employ a train-once-and-deploy scheme, i.e., a large amount of pre-labelled data is put into the training phase all at once.

However, this assumption is hard to satisfy in practice:

  1. Pairwise pedestrian data is prohibitively expensive to collect, since it is unlikely that a large number of pedestrians will reappear in other camera views.

  2. The increasing number of camera views amplifies the difficulty of searching for the same person across multiple camera views.

Solutions:

  1. Unsupervised learning algorithms

    Unsupervised learning based Re-ID models are inherently weaker than supervised learning based models, compromising Re-ID effectiveness in any practical deployment.

  2. Semi-supervised learning scheme

    These models are still based on the strong assumption that part of the identities (e.g. one third of the training set) are fully labelled for every camera view.

-> Reinforcement Learning + Active Learning:

human-in-the-loop (HITL) model learning process [1]

A step-by-step sequential active learning process is adopted, exploiting selective human annotations on a much smaller pool of samples for model learning.

The data cumulatively labelled through human binary verification are used to update model training and improve Re-ID performance.

Such an approach to model learning is naturally suited for reinforcement learning together with active learning, the focus of this work.


Full text:
http://openaccess.thecvf.com/content_ICCV_2019/html/Sun_MVP_Matching_A_Maximum-Value_Perfect_Matching_for_Mining_Hard_Samples_ICCV_2019_paper.html

The source code is here.

Introduction

Hard Samples

  1. The appearance of different pedestrians may be highly similar;

  2. The pose of a person may vary significantly as time and location change;

  3. The lighting conditions under some cameras are sometimes poor.

These hard samples can strongly slow down the convergence of metric learning, which works by pulling similar samples together into clusters while pushing dissimilar ones apart.

Worst of all, the learned embedding metric and feature representation could be heavily biased by these hard samples.


Full text:
http://openaccess.thecvf.com/content_ICCV_2019/html/Luo_Spectral_Feature_Transformation_for_Person_Re-Identification_ICCV_2019_paper.html

The source code is here.

Introduction

  • The two most prevalent types of loss functions in ReID are classification loss (e.g. softmax cross entropy loss) and metric learning based loss (e.g. triplet loss and contrastive loss):

    1. Classification loss has promising convergence but is vulnerable to overfitting. It processes samples individually and only builds connections implicitly through the classifier.

    2. Metric learning based loss explicitly optimizes the distances between samples (the standard triplet form is sketched after this list). However, the similarity structure it builds only involves a pair/triplet of data points and ignores other informative samples. This leads to a large proportion of trivial pairs/triplets, which can overwhelm the training process and eventually make the model suffer from slow convergence.

  • Most existing methods process data points individually or involve only a fraction of the samples when building a similarity structure, more or less ignoring the dense, informative connections among samples. This lack of holistic observation eventually leads to inferior performance. To relieve the issue, we propose to formulate the whole data batch as a similarity graph.
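
For reference, the standard triplet loss over an anchor $a$, a positive $p$, and a negative $n$ has the textbook form (not necessarily the exact variant used in the paper):

$$ \mathcal{L}_{\mathrm{triplet}} = \max\bigl( 0,\; d(f_a, f_p) - d(f_a, f_n) + m \bigr) $$

where $f_x$ is the embedded feature of sample $x$, $d(\cdot,\cdot)$ is a distance such as the Euclidean distance, and $m$ is the margin. Each loss term sees only its own triplet, which is exactly the limited similarity structure criticized above.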


Full text:
http://openaccess.thecvf.com/content_ICCV_2019/html/Yu_Robust_Person_Re-Identification_by_Modelling_Feature_Uncertainty_ICCV_2019_paper.html

The source code is here.

Challenges

  • Two types of noise are prevalent in practice:

    1. label noise caused by human annotator errors, i.e., people assigned the wrong identities

    2. data outliers caused by person detector errors or occlusion

  • Having both types of noisy samples in a training set inevitably has a detrimental effect on the learned feature embedding:

Noisy samples are often far from inliers of the same class in the input (image) space.

To minimise intra-class distance and pull the noisy samples close to their class centre, a ReID model often needs to sacrifice inter-class separability, leading to performance degradation.


Full text:
https://www.semanticscholar.org/paper/Batch-DropBlock-Network-for-Person-and-Beyond-Dai-Chen/2e4e3d80e0a789dcf45e61401c8af4e3fa96dfea

Challenges

  • the large variations in pose, background, illumination, camera conditions, and view angle

  • Because body parts such as faces, hands, and feet are unstable as the view angle changes, the CNN tends to focus on the main body part, and the other discriminative body parts, i.e., some attentive local features, are consequently suppressed.

Related Work

  1. Pose-based works seek to localize different body parts and align their associated features.

  2. Part-based works use coarse partitions or attention selection networks to improve feature learning.

Motivation

  1. Pose-based networks usually require additional body pose or segment information.

  2. These networks are designed around specific partition mechanisms, such as a horizontal partition, which fits person re-ID but is hard to generalize to other metric learning tasks.


Full text:
http://openaccess.thecvf.com/content_ICCV_2019/html/Wu_Unsupervised_Graph_Association_for_Person_Re-Identification_ICCV_2019_paper.html

The source code is here.

Introduction

Challenge One

Since a deep CNN under supervised learning is data-driven, it requires a large amount of pairwise labelled training data to learn view-invariant representations. However, labelling sufficient pairwise RE-ID data is expensive and time-consuming. How to improve the performance and scalability of deep RE-ID algorithms without pairwise labelled data (i.e., unsupervised learning) is a great challenge in recent person RE-ID research.

There have been a series of unsupervised image based methods to address this problem, which can be roughly divided into three categories:

  1. image-to-image translation

    transfer source-domain images to the target domain with a GAN

  2. domain adaptation

    transfer the source domain trained model to the target domain in an unsupervised manner

  3. unsupervised clustering

    obtain pseudo labels for the target-domain data through unsupervised clustering algorithms and fine-tune the source-domain model with these pseudo labels on the target domain.

Challenge Two

The precondition of the methods mentioned above is that there are some similarities between the source domain and the target domain.


Beyond Human Parts: Dual Part-Aligned Representations for Person Re-Identification

Full text:
http://openaccess.thecvf.com/content_ICCV_2019/html/Guo_Beyond_Human_Parts_Dual_Part-Aligned_Representations_for_Person_Re-Identification_ICCV_2019_paper.html

The source code is here.

Challenges - Misalignment Problem

The significant visual appearance changes caused by:

  1. human pose variation

  2. lighting conditions

  3. part occlusions

  4. background cluttering

  5. distinct camera viewpoints ……

Existing approaches to the misalignment problem include:

  1. Hand-crafted partitioning

    relies on manually designed splits of the input image or the feature maps into grid cells or horizontal stripes, based on the assumption that the human parts are well-aligned in the RGB color space

  2. The attention mechanism

    tries to learn an attention map over the last output feature map and constructs the aligned part features accordingly

  3. Predicting a set of predefined attributes as useful features to guide the matching process.

  4. Injecting human pose estimation or human parsing results to extract human-part-aligned features based on the predicted human key points or semantic human part regions; the success of such approaches, however, depends heavily on the accuracy of the human parsing models or pose estimators.

Motivation

Most of the previous studies mainly focus on learning more accurate human part representations, while neglecting the influence of potentially useful contextual cues that could be regarded as "non-human" parts.

Beyond these predefined part categories, there still exist many objects or parts which could be critical for person re-identification, but tend to be recognized as background by the pre-trained human parsing models.


Full text:
http://openaccess.thecvf.com/content_ICCV_2019/html/Fu_Self-Similarity_Grouping_A_Simple_Unsupervised_Cross_Domain_Adaptation_Approach_for_ICCV_2019_paper.html

The source code is here.

Challenges

  1. Deep re-ID models trained on the source domain may suffer a significant performance drop on the target domain due to the data bias between source and target datasets.

    -> unsupervised domain adaptation (UDA)

    -> generative adversarial network (GAN)

  2. The disparities of cameras are another critical factor influencing re-ID performance.

    -> Hetero-Homogeneous Learning (HHL [1])

However, the performance of these UDA approaches is still far behind that of their fully-supervised counterparts. The main reason is that most previous works focus on increasing the number of training samples or on comparing the similarity or dissimilarity between the source and target datasets, while ignoring the similar natural characteristics that exist among the training samples of the target domain.
