Object detection has long been an important research topic in computer vision and forms the basis of many applications. Despite the great progress made in recent years, object detection remains a challenging task. One key to improving detection performance is to exploit the contextual information available in the image itself or in a video sequence. Contextual information is defined as the interrelated condition in which something exists or occurs. In object detection, such interrelated conditions include the related background and surroundings, support from the image segmentation task, and the persistence of an object across the temporal domain in video-based detection. In this thesis, we propose multiple methods that exploit contextual information to improve the performance of object detection in images and videos.

First, we focus on exploiting spatial contextual information in still-image object detection, where each image is treated independently. Our research extracts contextual information using three approaches: a recurrent convolutional layer with feature concatenation (RCL-FC), 3-D recurrent neural networks (3-D RNN), and location-aware deformable convolution. Second, we focus on exploiting pixel-level contextual information from a related computer vision task, namely image segmentation. Here we apply a weakly-supervised auxiliary multi-label segmentation network to improve detection performance without increasing the inference time. Finally, we focus on video object detection, where the temporal contextual information between video frames is exploited. Our first work models short-term temporal contextual information using optical flow and long-term temporal contextual information using convLSTM.
Another work builds a two-path convLSTM pyramid that handles multi-scale temporal contextual information to deal with changes in object scale. Our last work is an event-aware convLSTM that forces the convLSTM to learn about the events that cause performance drops in a video sequence.