relation section

index.html CHANGED (+10 -46)

@@ -201,60 +201,25 @@
 plug-and-play model, BEYOND can easily cooperate with the Adversarial Trained Classifier (ATC), achieving state-of-the-art
 (SOTA) robustness accuracy. Experimental results show that BEYOND outperforms baselines by a large margin, especially under
 adaptive attacks. Empowered by the robust relationship built on SSL, we found that BEYOND outperforms baselines in terms
-of both detection ability and speed
+of both detection ability and speed.
 </p>
-<!-- <p>
-We present the first method capable of photorealistically reconstructing a non-rigidly
-deforming scene using photos/videos captured casually from mobile phones.
-</p>
-<p>
-Our approach augments neural radiance fields
-(NeRF) by optimizing an
-additional continuous volumetric deformation field that warps each observed point into a
-canonical 5D NeRF.
-We observe that these NeRF-like deformation fields are prone to local minima, and
-propose a coarse-to-fine optimization method for coordinate-based models that allows for
-more robust optimization.
-By adapting principles from geometry processing and physical simulation to NeRF-like
-models, we propose an elastic regularization of the deformation field that further
-improves robustness.
-</p>
-<p>
-We show that <span class="dnerf">Nerfies</span> can turn casually captured selfie
-photos/videos into deformable NeRF
-models that allow for photorealistic renderings of the subject from arbitrary
-viewpoints, which we dub <i>"nerfies"</i>. We evaluate our method by collecting data
-using a
-rig with two mobile phones that take time-synchronized photos, yielding train/validation
-images of the same pose at different viewpoints. We show that our method faithfully
-reconstructs non-rigidly deforming scenes and reproduces unseen views with high
-fidelity.
-</p> -->
 </div>
 </div>
 </div>
 <!--/ Abstract. -->
-
-<!-- Paper video. -->
-<!-- <div class="columns is-centered has-text-centered">
-<div class="column is-four-fifths">
-<h2 class="title is-3">Video</h2>
-<div class="publication-video">
-<iframe src="https://www.youtube.com/embed/MrKrnHhk8IA?rel=0&showinfo=0"
-frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
-</div>
-</div>
-</div> -->
-<!--/ Paper video. -->
 </div>
 </section>

 <section class="section">
 <div class="container is-max-desktop">
-<h2 class="title is-3">
+<h2 class="title is-3">Neighborhood Relations of Benign Examples and AEs</h2>
 <div class="columns is-centered">
-<div class="column
-
+<div class="column container-centered">
+<img src="./static/images/relations.png" alt="Neighborhood Relations of Benign Examples and AEs"/>
+<p><strong>Figure 1. Neighborhood Relations of Benign Examples and AEs.</strong> First, we augment the input image to obtain a bunch of its neighbors. Then, we
+perform the label consistency detection mechanism on the classifier’s prediction of the input image and that of neighbors predicted by
+SSL’s classification head. Meanwhile, the representation similarity mechanism employs cosine distance to measure the similarity among
+the input image and its neighbors. Finally, the input image with poor label consistency or representation similarity is flagged as AE.</p>
 </div>
 </div>
 </div>
@@ -265,9 +230,8 @@
 <h2 class="title is-3">Method Overview of BEYOND</h2>
 <div class="columns is-centered">
 <div class="column container-centered">
-<img src="./static/images/overview.png"
-
-<p><strong>Figure 1.</strong> Overview of <strong>BEYOND</strong>. First, we augment the input image to obtain a bunch of its neighbors. Then, we
+<img src="./static/images/overview.png" alt="Method Overview of BEYOND"/>
+<p><strong>Figure 2. Overview of BEYOND.</strong> First, we augment the input image to obtain a bunch of its neighbors. Then, we
 perform the label consistency detection mechanism on the classifier’s prediction of the input image and that of neighbors predicted by
 SSL’s classification head. Meanwhile, the representation similarity mechanism employs cosine distance to measure the similarity among
 the input image and its neighbors. Finally, the input image with poor label consistency or representation similarity is flagged as AE.</p>
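For context beyond the diff itself: the figure caption added above describes BEYOND's decision rule in prose (label consistency plus representation similarity over augmented neighbors). The sketch below is a minimal, hypothetical Python rendering of that rule, not the authors' implementation; `augment`, `classifier_predict`, `ssl_label`, and `ssl_embed` are assumed stand-ins for the protected classifier and the SSL model's heads, and `k`, `label_thresh`, and `sim_thresh` are illustrative values rather than numbers from the paper.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two 1-D feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_adversarial(x, augment, classifier_predict, ssl_label, ssl_embed,
                   k=50, label_thresh=0.8, sim_thresh=0.8):
    """Flag x as an adversarial example using the caption's two mechanisms."""
    neighbors = [augment(x) for _ in range(k)]  # k augmented views of x

    # Label consistency: does the classifier's label for x agree with the
    # SSL classification head's labels for x's augmented neighbors?
    y = classifier_predict(x)
    consistency = np.mean([ssl_label(n) == y for n in neighbors])

    # Representation similarity: cosine similarity between the SSL
    # representation of x and those of its neighbors.
    z = ssl_embed(x)
    similarity = np.mean([cosine_sim(z, ssl_embed(n)) for n in neighbors])

    # Poor label consistency OR poor representation similarity flags x as an AE.
    return consistency < label_thresh or similarity < sim_thresh
```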