<?xml version='1.0' encoding='utf-8'?>
<mods xmlns="http://www.loc.gov/mods/v3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="3.7" xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-7.xsd">
   <name type="personal">
      <role>
         <roleTerm type="text" authority="marcrelator" authorityURI="http://id.loc.gov/vocabulary/relators" valueURI="http://id.loc.gov/vocabulary/relators/cre">creator</roleTerm>
      </role>
      <namePart>Kim, Taeho</namePart>
   </name>
   <titleInfo>
      <title>Voxel Transformer with Density-Aware Deformable Attention for 3D Object Detection</title>
   </titleInfo>
   <originInfo>
      <dateCreated keyDate="yes" encoding="w3cdtf">2023</dateCreated>
   </originInfo>
   <note displayLabel="Degree Awarded">Spring 2023</note>
   <typeOfResource>text</typeOfResource>
   <genre authority="aat" valueURI="http://vocab.getty.edu/aat/300028029">Thesis</genre>
   <name type="corporate">
      <namePart>Illinois Institute of Technology</namePart>
   </name>
   <name type="corporate">
      <namePart>ECE / Electrical and Computer Engineering</namePart>
   </name>
   <name type="personal" authority="wikidata" authorityURI="https://www.wikidata.org" valueURI="https://www.wikidata.org/wiki/Q102410753">
      <role>
         <roleTerm type="text" authority="marcrelator" authorityURI="http://id.loc.gov/vocabulary/relators" valueURI="http://id.loc.gov/vocabulary/relators/ths">thesis advisor</roleTerm>
      </role>
      <namePart>Kim, Joohee</namePart>
   </name>
   <subject>
      <topic>Electrical engineering</topic>
   </subject>
   <subject>
      <topic>3D object detection</topic>
   </subject>
   <subject>
      <topic>Deep learning</topic>
   </subject>
   <subject>
      <topic>Deformable attention</topic>
   </subject>
   <subject>
      <topic>Density</topic>
   </subject>
   <subject>
      <topic>Point cloud</topic>
   </subject>
   <subject>
      <topic>Transformer</topic>
   </subject>
   <language>
      <languageTerm type="code" authority="rfc3066">en</languageTerm>
   </language>
   <abstract>The Voxel Transformer (VoTr) is a prominent model in 3D object detection, employing a transformer-based architecture to capture long-range voxel relationships through self-attention. However, although self-attention expands its receptive field, VoTr's flexibility remains limited because that receptive field is predefined. In this paper, we present the Voxel Transformer with Density-Aware Deformable Attention (VoTr-DADA), a novel approach to 3D object detection. VoTr-DADA leverages density-guided deformable attention to achieve a more adaptable receptive field: it uses density features to efficiently identify key areas in the input, combining the strengths of VoTr and deformable attention. We introduce the Density-Aware Deformable Attention (DADA) module, which is specifically designed to focus on these crucial areas while adaptively extracting more informative features. Experimental results on the KITTI dataset and the Waymo Open Dataset show that the proposed method outperforms the baseline VoTr model in 3D object detection while maintaining a fast inference speed.</abstract>
   <physicalDescription>
      <digitalOrigin>born digital</digitalOrigin>
      <internetMediaType>application/pdf</internetMediaType>
   </physicalDescription>
   <accessCondition type="useAndReproduction" displayLabel="rightsstatements.org">In Copyright</accessCondition>
   <accessCondition type="useAndReproduction" displayLabel="rightsstatements.orgURI">http://rightsstatements.org/page/InC/1.0/</accessCondition>
   <accessCondition type="restrictionOnAccess">Restricted Access</accessCondition>
   <identifier type="hdl">http://hdl.handle.net/10560/islandora:1025142</identifier>
</mods>