Commit 1054731

YiLunLee committed Dec 13, 2023
1 parent a36f6a5 commit 1054731
Showing 17 changed files with 570 additions and 0 deletions.
177 changes: 177 additions & 0 deletions index.html
@@ -0,0 +1,177 @@
<!DOCTYPE html>
<!-- modified from url=https://fuenwang.ml/project/led2net/ -->
<!-- <html lang="en" class="gr__ee_nycu_edu"> -->
<html lang="en">

<head>
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="author" content="ginawu">
<script
src="https://code.jquery.com/jquery-3.4.1.js"
integrity="sha256-WpOohJOqMqqyKL9FccASB9O0KwACQJpFTUBLTYOVvVU="
crossorigin="anonymous">
</script>

<title>Learning Face Recognition Unsupervisedly by Disentanglement and Self-Augmentation</title>

<!-- CSS includes -->
<link href="static/asset/bootstrap.min.css" rel="stylesheet">
<link href="static/asset/css" rel="stylesheet" type="text/css">
<link href="static/asset/mystyle.css" rel="stylesheet">
<link href="static/asset/fig_style.css" rel="stylesheet">

<style type="text/css">
.navbar-center {
display: inline-block;
float: none;
vertical-align: top;
}
</style>


</head>

<!-- <body data-gr-c-s-loaded="true"> -->
<body>

<!-- <div class="topnav" id="myTopnav">
<a href="#header">Home</a>
<a href="#abstract">Abstract</a>
<a href="#demo">Demo</a>
<a href="#paper">Paper</a>
<a href="#code">Code</a>
<a href="javascript:void(0);" class="icon" onclick="toggleTopNav()">&#9776;</a>
</div> -->


<div id="header" class="container-fluid">
<div class="row">
<h1>Learning Face Recognition Unsupervisedly by Disentanglement and Self-Augmentation</h1>
<div class="authors">
<a href="https://yilunlee.github.io/" target="_blank">Yi-Lun Lee</a>,
<a href="mailto:piews482zt@gmail.com" target="_blank">Min-Yuan Tseng</a>,
<a href="" target="_blank">Yu-Cheng Luo</a>,
<a href="" target="_blank">Dung-Ru Yu</a>,
<a href="https://walonchiu.github.io/" target="_blank">Wei-Chen Chiu</a>
<!-- <center>(* denotes equal contribution)</center> -->

<p style="text-align:center;">
National Chiao Tung University, Taiwan
<!-- <a href="http://nthu-en.web.nthu.edu.tw/bin/home.php" target="_blank"><img src="./ACCV2018/nthu_logo.png" height="150"></a> -->
<!-- &emsp; -->
</p>
</div>
</div>
</div>

<div class="container" id="links">
<center>
<div class="mx-auto">
<ul class="nav navbar-center">
<li class="nav-item text-center" style="display: inline-block;">
<a href="https://people.cs.nycu.edu.tw/~walon/publications/lee2020icra.pdf" class="nav-link">
<svg style="width:50px;height:50px;" viewBox="0 0 16 16">
<path fill="currentColor" d="M14 4.5V14a2 2 0 0 1-2 2H4a2 2 0 0 1-2-2V2a2 2 0 0 1 2-2h5.5L14 4.5zm-3 0A1.5 1.5 0 0 1 9.5 3V1H4a1 1 0 0 0-1 1v12a1 1 0 0 0 1 1h8a1 1 0 0 0 1-1V4.5h-2z"/>
<path fill="currentColor" d="M4.5 12.5A.5.5 0 0 1 5 12h3a.5.5 0 0 1 0 1H5a.5.5 0 0 1-.5-.5zm0-2A.5.5 0 0 1 5 10h6a.5.5 0 0 1 0 1H5a.5.5 0 0 1-.5-.5zm1.639-3.708 1.33.886 1.854-1.855a.25.25 0 0 1 .289-.047l1.888.974V8.5a.5.5 0 0 1-.5.5H5a.5.5 0 0 1-.5-.5V8s1.54-1.274 1.639-1.208zM6.25 6a.75.75 0 1 0 0-1.5.75.75 0 0 0 0 1.5z"/>
</svg><br>
Paper
</a>
</li>
<li class="nav-item text-center" style="display: inline-block;">
<a href="https://github.com/YiLunLee/unsupervised-face-recognition" class="nav-link">
<svg style="width:50px;height:50px" viewBox="0 0 16 16">
<path fill="currentColor" d="M8 0C3.58 0 0 3.58 0 8c0 3.54 2.29 6.53 5.47 7.59.4.07.55-.17.55-.38 0-.19-.01-.82-.01-1.49-2.01.37-2.53-.49-2.69-.94-.09-.23-.48-.94-.82-1.13-.28-.15-.68-.52-.01-.53.63-.01 1.08.58 1.23.82.72 1.21 1.87.87 2.33.66.07-.52.28-.87.51-1.07-1.78-.2-3.64-.89-3.64-3.95 0-.87.31-1.59.82-2.15-.08-.2-.36-1.02.08-2.12 0 0 .67-.21 2.2.82.64-.18 1.32-.27 2-.27.68 0 1.36.09 2 .27 1.53-1.04 2.2-.82 2.2-.82.44 1.1.16 1.92.08 2.12.51.56.82 1.27.82 2.15 0 3.07-1.87 3.75-3.65 3.95.29.25.54.73.54 1.48 0 1.07-.01 1.93-.01 2.2 0 .21.15.46.55.38A8.012 8.012 0 0 0 16 8c0-4.42-3.58-8-8-8z">
</svg><br>
Code
</a>
</li>
</ul>
</div>
</center>
</div>

<div class="container" id="abstract">
<!-- <img src="static/fig/teaser.jpeg" height="50%" width="100%"> -->


<h2>Abstract</h2>
<p style="text-align: justify;">
With the growth of smart home, healthcare, and home robot applications, learning a face recognition system that is specific to a particular environment and capable of self-adapting to temporal changes in appearance (e.g., caused by illumination or camera position) has become an important topic. In this paper, given a video of a group of people, which simulates the surveillance video in a smart home environment, we propose a novel approach that unsupervisedly learns a face recognition model based on two main components: (1) a triplet network that extracts identity-aware features from face images for performing face recognition by clustering, and (2) an augmentation network that is conditioned on the identity-aware features and aims at synthesizing more face samples. In particular, the training data for the triplet network is obtained by using the spatiotemporal characteristics of face samples within a video, while the augmentation network learns to disentangle a face image into identity-aware and identity-irrelevant features and is thus able to generate new faces of the same identity but with variations in appearance. By taking the richer training data produced by the augmentation network, the triplet network is further fine-tuned and achieves better performance in face recognition. Extensive experiments not only show the efficacy of our model in unsupervisedly learning an environment-specific face recognition model, but also verify its adaptability to various appearance changes.
</p>

</div>
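
<!-- Implementation sketch (not the authors' code): the abstract above trains
     a triplet network from spatiotemporal cues in the video, i.e. faces from
     the same tracklet are treated as positives while faces co-occurring in
     one frame must be different people and serve as negatives. The PyTorch
     snippet below is a minimal illustration of that idea under our own
     assumptions (the TripletNet architecture, the 0.2 margin, and the 112x112
     input size are made up here); see the repository linked above for the
     actual implementation.

import torch
import torch.nn as nn

class TripletNet(nn.Module):
    """Maps a face crop to an L2-normalized, identity-aware embedding."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        # Any face backbone works here; a tiny CNN keeps the sketch short.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.normalize(self.backbone(x), dim=1)

net = TripletNet()
loss_fn = nn.TripletMarginLoss(margin=0.2)

# anchor/positive: two crops from the same tracklet (assumed same identity);
# negative: a face from the same frame as the anchor (must be someone else).
anchor, positive, negative = (torch.randn(8, 3, 112, 112) for _ in range(3))
loss = loss_fn(net(anchor), net(positive), net(negative))
loss.backward()

     The resulting embeddings are then grouped by clustering to obtain the
     identity assignments used downstream, as the abstract states.
-->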

<div class="container" id="experiment">
<h2>Method</h2>
<img src="static/fig/model.jpeg" width="100%">
</div>
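
<!-- Implementation sketch (not the authors' code): the augmentation network
     in the figure above disentangles a face into an identity-aware code and
     an identity-irrelevant appearance code, then recombines codes to render
     new samples of the same identity. The toy PyTorch snippet below shows
     only that recombination step; every module name and size is an
     assumption made for illustration.

import torch
import torch.nn as nn

# Two encoders split a face into the two codes the abstract describes.
id_enc  = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))  # identity-aware
app_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 64))   # identity-irrelevant
decoder = nn.Sequential(nn.Linear(128 + 64, 3 * 112 * 112), nn.Tanh())

face_a = torch.randn(4, 3, 112, 112)  # supplies the identity
face_b = torch.randn(4, 3, 112, 112)  # supplies the appearance (e.g. lighting)

# A new face: identity of face_a rendered with the appearance of face_b.
# Feeding such samples back enriches the triplet network's training data.
mixed = decoder(torch.cat([id_enc(face_a), app_enc(face_b)], dim=1))
augmented = mixed.view(4, 3, 112, 112)
-->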

<div class="container" id="experiment">
<h2>Quantitative Results</h2>
<img src="static/fig/quantitative_results.png" width="100%">

<h2>Ablation Studies</h2>
<center><h5>Same group but with dramatic appearance changes</h5></center>
<img src="static/fig/diff_appearance.png" width="100%">
<center><p><br></p></center>
<center><h5>Same group but with dramatic illumination changes</h5></center>
<img src="static/fig/daynight.png" width="100%">
<center><p><br></p></center>
<center><h5>Adding a new group</h5></center>
<img src="static/fig/add_group.png" width="100%">


<h2>Visualization</h2>
<center><img src="static/fig/visualization.png" width="80%"></center>

<!-- <h3>Generation</h3>
<img src="static/fig/sample.jpg" width="100%">
<center><p>Qualitative examples of the point clouds generated by our proposed recursive point cloud generator (RPG).</p></center>
<h3>Interpolation</h3>
<img src="static/fig/interpolation.jpg" width="100%">
<center><p>Examples for our interpolation between different shapes: (a) Rows sequentially show the point clouds generated on all the expansion stages while interpolating between the chairs on the bottom-left and bottom-right corners; (b) Each row shows interpolation between two 3D shapes of the same object category; (c) Each row shows interpolation between two shapes from different categories.</p></center>
<h3>Co-segmentation</h3>
<img src="static/fig/co-segmentation.jpg" width="100%">
<center><p>Visualization of co-segmentation results among object instances from Car, Chair and Airplane categories in ShapeNet. For each object category, the rows sequentially highlight different common parts with green color shared across the instances.</p></center>
-->

</div>



<!-- <div class="container" id="paper">
<h3>Citation</h3>
<div class="alert alert-secondary" role="alert">
<pre>@misc{ko2021rpg,
title={RPG: Learning Recursive Point Cloud Generation},
author={Wei-Jan Ko and Hui-Yu Huang and Yu-Liang Kuo and Chen-Yi Chiu and Li-Heng Wang and Wei-Chen Chiu},
year={2021},
eprint={2105.14322},
archivePrefix={arXiv},
primaryClass={cs.CV}}</pre>
</div>
</div> -->

<br>

<!-- Javascript includes -->
<!--
<script src="static/asset/jquery-1.8.3.min.js"></script>
<script src="static/asset/mystyle.js"></script>
<script src="static/asset/bootstrap.min.js"></script>
<script async="" src="static/asset/analytics.js"></script><script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-98479202-1', 'auto');
ga('send', 'pageview');
</script>
<div id="point-jawn" style="z-index: 2147483647;"></div></body></html>
-->

Binary file added static/.DS_Store
Binary file not shown.