Sugar-coated poison defense on deepfake face-swapping attacks | Proceedings of the 5th ACM/IEEE International Conference on Automation of Software Test (AST 2024)


Open Access

AST '24: Proceedings of the 5th ACM/IEEE International Conference on Automation of Software Test (AST 2024), April 2024, Pages 78–87. https://doi.org/10.1145/3644032.3644459

Published: 10 June 2024


ABSTRACT

Deepfake face-swapping technology has matured and become widespread on the Internet, and its misuse raises significant concerns for application security and privacy. To counter deepfake threats, we propose a sugar-coated poison defense targeting the latent vectors of generative models. The strategy aims to degrade visual quality without substantially increasing reconstruction loss. We establish metrics for visual effect and reconstruction loss to assess how perturbations affect latent vectors, emphasizing those with the greatest visual impact at the least reconstruction cost. Our approach first uses a facial feature extraction model to convert faces into latent representations. We then introduce two latent selection methods: (1) SHAP-based latent selection, which approximates latent importance with a linear regression model, and (2) grid-search latent selection, which borrows heuristics from adversarial attacks. These methods pinpoint latent vectors that, when perturbed, increase face landmark distances while keeping the mean squared error low; MSE is the optimization metric commonly used in deepfake reconstruction models. We then apply inconsistent perturbations to the selected latent vectors across video frames, acting as sugar-coated poison for deepfake face-swapping applications. Preliminary results show that these perturbations can be applied to individual videos with low reconstruction loss. Importantly, they induce a measurable consistency reduction in the resulting deepfake videos, making them more discernible and easier to identify.
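The selection-and-perturbation pipeline the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: `impact` and `recon_cost` stand in for the per-dimension landmark-distance gain and MSE increase (which the paper estimates via SHAP with a linear-regression surrogate, or via grid search), and all function names are invented for this sketch.

```python
import numpy as np

def select_latents(impact, recon_cost, k=4):
    """Rank latent dimensions by visual impact per unit of
    reconstruction cost and return the indices of the top k.
    Both score vectors are stand-ins for the paper's SHAP /
    grid-search estimates."""
    ratio = impact / (recon_cost + 1e-8)
    return np.argsort(ratio)[::-1][:k]

def poison_frames(Z, idx, eps=0.05, seed=0):
    """Apply *inconsistent* perturbations: each frame (row of Z)
    receives independent noise on the selected dimensions, so each
    reconstruction stays close frame-by-frame while the temporal
    consistency of the swapped video degrades."""
    rng = np.random.default_rng(seed)
    Z = Z.copy()
    Z[:, idx] += eps * rng.standard_normal((Z.shape[0], len(idx)))
    return Z

# Toy usage: 10 frames of 8-dimensional latents.
rng = np.random.default_rng(1)
Z = rng.standard_normal((10, 8))
impact = np.array([0.1, 0.9, 0.2, 0.8, 0.1, 0.1, 0.7, 0.1])
cost = np.array([0.5, 0.1, 0.5, 0.2, 0.5, 0.5, 0.1, 0.5])
idx = select_latents(impact, cost, k=3)   # dims with high impact, low cost
Zp = poison_frames(Z, idx)                # only those dims are perturbed
```

The per-frame independence of the noise is the key design point: a consistent perturbation could survive the swap unnoticed, whereas inconsistent noise surfaces as frame-to-frame flicker that makes the forged video easier to flag.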



Index Terms

• Security and privacy
  • Human and societal aspects of security and privacy
    • Privacy protections



Published in

AST '24: Proceedings of the 5th ACM/IEEE International Conference on Automation of Software Test (AST 2024)
April 2024, 235 pages
ISBN: 9798400705885
DOI: 10.1145/3644032

Chair: Francesca Lonetti
Proceedings Chair: Antonio Guerriero (Università degli Studi di Napoli Federico II, Italy)
Program Chair: Mehrdad Saadatmand
Program Co-chairs: Christof J. Budnik (Siemens Technology, USA), Jenny Li (Kean University, USA)
Copyright © 2024. Copyright is held by the owner/author(s). Publication rights licensed to ACM.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher: Association for Computing Machinery, New York, NY, United States


Author Tags

• poison defense
• deepfake
• face-swapping

Qualifiers

• research-article


