Search results
(1 - 20 of 91)
Pages
- Title
- HIGH PERFORMANCE, HIGH STABILITY AND LOW POWER SRAM DESIGN BY USING CARBON NANOTUBE FIELD EFFECT TRANSISTORS
- Creator
- Wang, Wei
- Date
- 2012-07-07, 2012-07
- Description
-
As the feature size of silicon semiconductor devices scales down to the nanometer range, planar bulk CMOS design and fabrication encounter significant challenges. The situation is exacerbated for SRAM, which accounts for a large share of the power consumption and area overhead in modern VLSI processor designs. To achieve higher performance and stability and lower power consumption, the carbon nanotube (CNT) has been introduced to SRAM design as an alternative material. Semiconducting single-walled CNTs are promising candidates for the channel material of CMOS devices because of two advantages over other semiconductor materials: high ON current, leading to high speed, and low OFF current, leading to less leakage power. In this work, the technology parameters of the 6T carbon nanotube field-effect transistor (CNFET) SRAM cell are characterized to establish the relationship between SRAM delay/power and CNFET technology parameters. Stability is studied by investigating the impact of CNT diameter and transistor ratio on the SRAM static noise margin (SNM). A stability-optimized 6T CNFET SRAM cell achieves a 38.88% reading delay reduction, 21.61% writing delay reduction, 85.65% reading power reduction, 5.88% writing power reduction, 97.80% leakage power reduction, 41.41% SNM increase, 91.23% reading power-delay product (PDP) reduction, and 26.23% writing PDP reduction compared with a conventional silicon MOSFET SRAM cell. To mitigate the impact of major CNT imperfections on CNFET circuits, a misalignment-immune SRAM design method is proposed that eliminates the CNT misalignment problem through etching regions defined in the circuit layout, and a diameter-variation sensing and compensating system is designed to mitigate the negative impact of CNT diameter variation on SRAM delay and power consumption.
A hybrid silicon/CNT 4T SRAM cell is proposed for low-power, high-density cache applications; it outperforms the conventional 6T SRAM in both power consumption and circuit area. Finally, a design flow for high-performance, high-stability, low-power SRAM is summarized.
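As a side note on the metric cited above, the power-delay product is simply power multiplied by delay, i.e., energy per operation. A minimal sketch of how such percentage reductions are computed, using made-up illustrative numbers rather than the dissertation's measurements:

```python
def pdp(power_w, delay_s):
    """Power-delay product: energy per operation, in joules."""
    return power_w * delay_s

def reduction_pct(baseline, improved):
    """Percentage reduction of `improved` relative to `baseline`."""
    return 100.0 * (baseline - improved) / baseline

# Hypothetical illustrative numbers (NOT the dissertation's measured values):
base = pdp(power_w=40e-6, delay_s=100e-12)  # baseline cell
opt = pdp(power_w=10e-6, delay_s=60e-12)    # optimized cell
print(f"PDP reduction: {reduction_pct(base, opt):.1f}%")  # -> 85.0%
```

Because PDP multiplies the two quantities, a cell can trade a small delay penalty for a large power saving and still reduce PDP.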
Ph.D. in Electrical Engineering, July 2012
- Title
- NOVEL FAULT DIAGNOSTIC TECHNIQUE AND UNIVERSAL SENSOR FOR PERMANENT MAGNET ELECTRIC MACHINES USING SEARCH COILS
- Creator
- Da, Yao
- Date
- 2012-04-23, 2012-05
- Description
-
Over the past decade, permanent magnet synchronous machines (PMSMs) have gained significant popularity in industrial applications such as wind turbines and electric vehicles, owing to their high efficiency, high output power-to-volume ratio, and high torque-to-current ratio. In these mission-critical applications, an unexpected fault or failure of the machine can lead to very high repair or replacement costs, or even catastrophic system failure. Therefore, a robust and reliable health monitoring and fault diagnostic approach is desired, which can help schedule preventive maintenance to lengthen machine lifespan and avoid failure. This dissertation presents a novel multi-fault diagnostic approach using search coils. The search coils are wound around armature teeth, so they typically need to be installed during manufacturing; however, their immunity to high-frequency harmonics makes the approach suitable for inverter/rectifier-fed motors and generators, such as those in wind turbines and automotive systems. In addition, the method does not require knowledge of the proprietary constructional details of the machine. Since the electromagnetic flux is measured directly, the method provides information unavailable to other schemes, such as the direction of eccentricity and the location of shorted windings. Furthermore, it can also evaluate the severity of each fault, which is of significant importance in mission-critical applications such as automotive, aerospace, and military systems. Beyond diagnostics, the search coils can be used as a universal sensor to estimate phase current or rotor position, critical quantities in PMSM closed-loop control, allowing them to serve as backup sensors for fault-tolerant operation. The proposed fault detection scheme and universal sensor concept have been tested under several scenarios with finite element analysis and validated experimentally.
Ph.D. in Electrical Engineering, May 2012
- Title
- OPTIMIZATION AND MARKET CLEARING IN THE POWER SYSTEMS WITH HIGH-LEVEL RENEWABLES
- Creator
- Ye, Hongxing
- Date
- 2016, 2016-05
- Description
-
The increasing penetration of renewable energy sources (RES), such as wind and solar generation, to meet various renewable portfolio standards (RPS) has introduced more uncertainty into power systems in recent years. The RES penetration level is expected to increase further in order to reduce emissions and fight climate change, and the growing uncertainty poses new challenges in power and energy systems: advanced models and technologies are urgently needed to provide secure, affordable, and clean energy to customers. The Security-Constrained Unit Commitment (SCUC) problem is one of the most important tools in the modern power system; it determines the optimal short-term generation schedule, and electricity is priced and settled based on its solution. To manage the uncertainty caused by renewables, new SCUC models and solution approaches are needed, and SCUC formulations that account for uncertainty have become a research focus in recent years. The proposed optimistic robust SCUC combines the idea of robust optimization with the reserve concept in electricity markets. The merit of robust optimization is that its solution can be immunized against any realization of the uncertainty, exactly meeting the first priority of power system operation: reliability. While robust optimization is attractive in theory, a solution is robust if and only if the system can survive the worst-case scenario, so the key task is to identify that scenario. Unfortunately, finding the worst-case scenario is in general a non-deterministic polynomial-time hard (NP-hard) problem. This creates difficulty in satisfying the timeliness requirement that the optimal schedule be obtained quickly (e.g., within several hours) in the day-ahead electricity markets. This dissertation proposes a fast solution approach to finding the worst-case scenario by exploiting the special structure of the SCUC problem.
This dissertation also proposes a new market mechanism, based on robust optimization, for managing the uncertainty caused by high-level RES. A new concept, the Uncertainty Marginal Price (UMP), is proposed to charge uncertainty sources and credit flexible sources. For the first time, explicit price signals are provided and utilized to manage any level of uncertainty within a robust optimization framework. The proposed mechanism manages uncertainty from both the source side (uncertainty reduction) and the resource side (uncertainty accommodation). In the short term, it provides incentives for RES operators to improve forecasting accuracy (i.e., to reduce uncertainty) and for existing flexible resources (e.g., storage) to participate in uncertainty accommodation. In the long term, it provides price signals for siting new flexible resources (e.g., energy storage) to accommodate the uncertainty from increasing RES penetration.
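To illustrate the flavor of worst-case identification (not the dissertation's actual structure-exploiting algorithm), consider a single transmission-limit constraint under a simple box (interval) uncertainty set, where the worst case is available in closed form. A minimal sketch with made-up shift factors and injections:

```python
def worst_case_deviation(coeffs, deltas):
    """For a constraint sum(a_i * (d_i + u_i)) <= b with |u_i| <= delta_i,
    the constraint-maximizing deviation is u_i = delta_i * sign(a_i)."""
    return [d if a > 0 else -d for a, d in zip(coeffs, deltas)]

def worst_case_lhs(coeffs, nominal, deltas):
    """Left-hand side of the constraint under the worst-case deviation."""
    u = worst_case_deviation(coeffs, deltas)
    return sum(a * (dn + ui) for a, dn, ui in zip(coeffs, nominal, u))

a = [0.4, -0.2, 0.3]       # hypothetical shift factors of three buses on one line
d = [100.0, 80.0, 50.0]    # nominal net injections (MW)
delta = [10.0, 10.0, 5.0]  # interval uncertainty budgets (MW)

print(worst_case_lhs(a, d, delta))  # worst-case line flow in MW
```

With general polyhedral uncertainty sets and binary commitment decisions the same question becomes the NP-hard subproblem described above; the closed form here exists only because the box set decouples per bus.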
Ph.D. in Electrical Engineering, May 2016
- Title
- SPATIO-TEMPORAL RECONSTRUCTION FOR GATED CARDIAC SPECT
- Creator
- Niu, Xiaofeng
- Date
- 2011-07, 2011-07
- Description
-
In myocardial perfusion imaging with single photon emission computed tomography (SPECT), gated acquisition is often used to deal with the blur caused by cardiac motion in the resulting images. While gating can provide useful information about myocardial function, it also inevitably reduces the signal-to-noise ratio of the acquired data. In this work, we investigate and evaluate image reconstruction methods for improving the quality of reconstructed images in cardiac gated SPECT imaging. First, we propose a spatio-temporal (i.e., 4D) reconstruction procedure for gated images based on discrete Fourier transform (DFT) basis functions, wherein the image activity at each spatial location is modeled by a Fourier representation along the gate dimension. The gated images are then reconstructed by determining the coefficients of the Fourier representation. We explore two reconstruction algorithms, one a penalized least-squares approach and the other a maximum a posteriori approach. Our simulation results demonstrate that the use of DFT basis functions in gated imaging can improve reconstruction accuracy. While in gated imaging the tracer distribution is traditionally treated as constant, a recent development is gated dynamic imaging, where the goal is to obtain, from a single acquisition, an image sequence that simultaneously shows both cardiac motion and the change in tracer distribution over the course of imaging. In this work, we further develop and demonstrate a fully 5D (3D space plus time plus gate) reconstruction procedure for cardiac gated, dynamic SPECT imaging, where the challenge is even greater without the use of multiple fast camera rotations. We develop and compare two iterative reconstruction algorithms: one based on the modified block sequential regularized EM (BSREM-II) algorithm, and the other based on a B-spline algorithm.
Our simulation results demonstrate that the 5D reconstruction procedure can yield gated dynamic images that provide quantitative information for both perfusion defect detection and cardiac motion. Building on the success of 5D reconstruction, we also study the saliency of 5D images for detecting perfusion defects. We explore efficient ways to characterize and visualize information pertinent to perfusion defects in a 5D image sequence, and apply various metrics to quantify the degree to which perfusion deficits can be detected. We show that these metrics can be used to produce new types of visualizations, showing wall motion and perfusion information, that may prove useful for clinical evaluation. Finally, with the ultimate goal of effective lesion defect detection for clinical use, we investigate a direct reconstruction approach that determines a sequence of gated kinetic parameter images from a single acquisition, providing information on both tracer kinetics and wall motion. To combat the greatly under-determined nature of the problem, we apply smoothness constraints that exploit similarity both among the different gates and within the local spatial neighborhood. The parameter images of the different gates are then determined jointly by maximum a posteriori estimation from all the available image data.
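The DFT-basis idea can be illustrated in one dimension: represent a time-activity curve over the cardiac gates by a few Fourier coefficients and reconstruct from them. A toy sketch with made-up gate values (the actual method estimates the coefficients inside the tomographic reconstruction, not from a known curve):

```python
import cmath

def dft(x):
    """Forward DFT, normalized by N."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N)) / N
            for k in range(N)]

def idft_truncated(C, keep):
    """Inverse DFT using only the DC term and the lowest harmonic pairs
    (frequency index, or its alias N-k, below `keep`)."""
    N = len(C)
    return [sum(C[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N) if min(k, N - k) < keep).real
            for n in range(N)]

gates = [10, 14, 18, 14, 10, 6, 2, 6]        # toy activity over 8 cardiac gates
smooth = idft_truncated(dft(gates), keep=2)  # DC + first harmonic only
print([round(v, 2) for v in smooth])
```

Keeping only low-order harmonics regularizes the gate dimension: cardiac motion is roughly periodic, so a few coefficients capture the curve while suppressing gate-to-gate noise.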
Ph.D. in Electrical Engineering, July 2011
- Title
- Large Scale Integration of Sustainable Energy and Congestion Management in Western Interconnection
- Creator
- Aflaki Khosrosha, Kaveh
- Date
- 2012-07-12, 2012-07
- Description
-
Large-scale integration of sustainable energy, such as wind and solar, into the bulk grid introduces inevitable challenges to regional transmission and generation systems. The most important challenges for the transmission system are congestion management and planning transmission expansion to deliver the zero-cost generated electricity; another major challenge is the competition of existing fuel-based generation units with zero-cost sustainable energy in the electricity market. In this dissertation, these challenges are identified and analyzed for a large-scale grid. This thesis presents a new method for studying transmission congestion in the Western Interconnection of the United States, based on a Security-Constrained Unit Commitment (SCUC) formulation whose results are applied to congestion analysis. The thesis also presents results and findings from simulating the operation of the Western Interconnection with large-scale wind and solar energy integration for the year 2030. High levels of wind and solar energy, with forecasted wind and solar time-series profiles, were integrated into the Western Interconnection grid, and their impact on the different existing types of generation plants is studied. The sensitivities to fuel prices, wind turbine power output, load volatility, demand-side management, and carbon tax are analyzed across different scenarios. Incorporating sustainable energy at large scale into the bulk grid footprint was shown to require planned transmission expansion; expansion reduces grid congestion and balances Locational Marginal Prices (LMP). This thesis explores advancements in high-performance computing and visual analytics for economics-based transmission expansion in the Western Electricity Coordinating Council (WECC), based on 2018 and 2029 forecasted data.
It identifies transmission congestion and the average LMP for each area, and expands the transmission system while accommodating large-scale wind and solar energy to achieve the Department of Energy's renewable energy vision for the year 2030. An iterative transmission expansion analysis, based on the average LMP of each area, is used to identify the minimum set of WECC transmission lines required. All results are visualized on a Geographical Information System (GIS) map of North America.
Ph.D. in Electrical Engineering, July 2012
- Title
- DISTRIBUTED VIDEO CODING FOR RESOURCE CONSTRAINED VIDEO APPLICATIONS
- Creator
- Liu, Wenhui
- Date
- 2014, 2014-05
- Description
-
Video coding technology has played a key role in the explosion of today's multimedia society, with ever-increasing resolution and quality. This success is largely built on the conventional video coding paradigm, in which motion estimation and compensation are performed at the encoder. That asymmetry in complexity is well suited to applications where a video sequence is encoded once and decoded many times. However, emerging applications such as wireless video surveillance, wireless PC cameras, and multimedia sensor networks require low-complexity encoding while possibly affording high-complexity decoding. A challenging problem for this new type of visual communication system is therefore how to achieve low-complexity video compression while maintaining good coding efficiency. Distributed video coding (DVC) provides low-complexity encoding solutions for video communication under computational power or energy constraints. In DVC, the source video is independently encoded at lightweight encoders; at the decoder, the statistical dependencies between the received bitstreams are jointly exploited. In this way, motion estimation and its computational complexity are shifted from the encoder to the decoder. However, DVC has its own limitations. Its coding efficiency remains low compared with conventional video coding, and although DVC is robust to channel loss thanks to its independent encoders and joint decoder, its error resiliency under medium to large transmission errors is weak. In this dissertation, the previously proposed low-complexity DVC (LC-DVC) architecture is first introduced, followed by continued work to further improve the quality of the side information (SI).
The proposed method is spatio-temporal joint bilateral upsampling (STJBU) based SI generation, in which the geometric closeness of pixels and their photometric similarity are exploited to reduce noise while preserving edge information. Moreover, a distributed multiple description coding (DMDC) scheme is proposed, combining multiple description (MD) coding with LC-DVC to improve its error resiliency. All the proposed schemes are described in detail, and rate-distortion analyses are presented. Together, these features make LC-DVC a strong solution for resource-constrained applications.
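The core of joint bilateral upsampling is a filter whose weights combine spatial closeness with photometric similarity measured in a guide image, so that smoothing stops at guide-image edges. A minimal 1-D sketch with illustrative parameters (STJBU additionally exploits temporal neighbors and operates on 2-D depth/SI frames):

```python
import math

def joint_bilateral_1d(depth, guide, sigma_s=1.0, sigma_r=10.0, radius=2):
    """1-D joint bilateral filter: each sample becomes a weighted average of
    its neighbors, weighted by spatial closeness AND photometric similarity
    in the guide (color) signal, so guide edges halt the smoothing."""
    out = []
    for i in range(len(depth)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(depth), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) *
                 math.exp(-((guide[i] - guide[j]) ** 2) / (2 * sigma_r ** 2)))
            num += w * depth[j]
            den += w
        out.append(num / den)
    return out

# Noisy values with an edge aligned to a sharp edge in the guide image:
depth = [10, 10, 11, 9, 50, 51, 50, 50]
guide = [0, 0, 0, 0, 255, 255, 255, 255]
print([round(v, 1) for v in joint_bilateral_1d(depth, guide)])
```

Because the cross-edge photometric weights are essentially zero, the noise on each side is averaged away while the 10-to-50 discontinuity survives intact.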
Ph.D. in Electrical Engineering, May 2014
- Title
- POWER OPTIMIZATION IN DEEP SUBMICRON VLSI CIRCUITS: FROM SYSTEM LEVEL TO CIRCUIT LEVEL
- Creator
- Tong, Qiang
- Date
- 2017, 2017-07
- Description
-
As VLSI technology advances into the deep sub-micron regime, power consumption has become a critical concern in VLSI circuits, making power optimization mandatory in today's VLSI design. To reduce power consumption, many techniques have been proposed at the various levels of VLSI circuit design: system level, register-transfer level (RTL), and circuit/transistor level. This dissertation starts with a review of system-level power optimization techniques. Experiments on a computer architecture simulation system were conducted to compare the impact of different programming styles on power consumption at the system level; the results can serve as intuitive guidance for programmers who intend to implement power-aware systems. The second topic is a clustering-based clock gating technique targeting power reduction at the RT level. Clock gating is an effective and popular method of reducing dynamic power in VLSI circuits and can be applied at both the RT level and the gate level; its basic idea is to disable the clock of one or more sequential elements (mainly flip-flops) when the input data of those cells do not change. In this dissertation, a clustering-based clock gating technique is proposed that exploits the activity information of each flip-flop and clusters the flip-flops into groups according to their activity correlations. As leakage power has become a major concern in VLSI design, the proposed clustering method is extended down to the gate level, and a clustering-based hybrid clock gating and power gating technique is proposed that can reduce both dynamic and leakage power. As process technology scales down to the deep submicron regime, bulk CMOS has encountered many challenges due to the short channel effect (SCE), which degrades the reliability and feasibility of MOSFET devices.
New technologies such as the FinFET and the carbon nanotube FET (CNFET) are two promising substitutes for addressing the SCE issue in the coming decade. Part of this dissertation presents circuit designs using these new process technologies for low-power VLSI circuits. More specifically, two SRAM cell designs using FinFET and CNFET devices are proposed; the new designs improve performance while reducing power consumption.
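The activity-correlation clustering step can be sketched in a few lines: flip-flops whose enable activity is strongly correlated share one gated clock. A toy greedy version (the threshold, traces, and greedy seeding are illustrative assumptions, not the dissertation's exact algorithm):

```python
def pearson(a, b):
    """Pearson correlation of two equal-length activity traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def cluster_by_activity(traces, threshold=0.7):
    """Greedy clustering: a flip-flop joins the first cluster whose seed trace
    correlates above `threshold`; otherwise it seeds a new cluster."""
    clusters = []  # list of (seed_trace, member_indices)
    for i, t in enumerate(traces):
        for seed, members in clusters:
            if pearson(t, seed) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((t, [i]))
    return [members for _, members in clusters]

# Toy per-cycle enable activity of four flip-flops:
traces = [
    [1, 0, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 0, 1, 0, 0, 0],  # mostly tracks the first
    [0, 1, 0, 1, 0, 1, 0, 1],  # opposite phase
    [0, 1, 0, 1, 0, 1, 1, 1],  # mostly tracks the third
]
print(cluster_by_activity(traces))  # -> [[0, 1], [2, 3]]
```

Each resulting group can then share a single clock-gating cell, trading a small mismatch in enable conditions for a reduction in gating logic overhead.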
Ph.D. in Electrical Engineering, July 2017
- Title
- LARGE-SCALE SIMULATION OF ELECTRIC POWER SYSTEMS FOR WIND
- Creator
- Wei, Tian
- Date
- 2011-08, 2011-07
- Description
-
The utilization of wind energy promises great socioeconomic benefits through reduced power plant emissions and the supply of zero-cost energy; however, large-scale wind energy integration could introduce inevitable challenges to regional transmission systems and hourly system operations. This thesis addresses congestion identification, the simulation and analysis of large-scale electric power systems under different scenarios, large-scale wind energy integration, and related transmission expansion issues. A methodology based on security-constrained unit commitment (SCUC) is applied to analyze transmission congestion in the Eastern Interconnection of the United States. The identified congestion is visualized with Geographical Information System (GIS) data and compared with the results of the National Electric Transmission Congestion Study (NETCS) published by the Department of Energy of the United States in 2006. The study also provides locational marginal price (LMP) information for the Eastern Interconnection, which is not available in the NETCS report. This thesis implements a comprehensive simulation and scenario analysis of the Illinois electric power system for the year 2011; possible scenarios representing electrical load sensitivity to economic growth, fuel price variations, and the impact of carbon cost are studied. The thesis then presents hourly simulation results for large-scale wind energy integration in the Eastern Interconnection. An hourly unit commitment is applied to simulate the economics of wind energy integration in the year 2030, with the energy portfolio for supplying the hourly 2030 load developed from wind integration levels. The sensitivities of fuel price, wind energy quantity, load forecast, carbon cost, and load management to the proposed 2030 wind integration are studied.
This thesis identifies transmission congestion and expands the existing transmission system in the Eastern Interconnection of the United States to accommodate large-scale integration of wind energy. Violated transmission flows that would render the hourly SCUC infeasible are identified, and an iterative transmission expansion analysis is implemented to identify the minimum additions to the Eastern Interconnection required to mitigate hourly transmission congestion.
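The iterative expansion idea can be caricatured in a few lines: repeatedly add capacity to the most violated line until no limit is exceeded. A toy sketch with made-up flows and a fixed capacity step (the actual study re-runs the hourly SCUC at each iteration rather than holding flows fixed):

```python
def expand(flows, limits, step=50.0):
    """Greedily add `step` MW of capacity to the most overloaded line until
    every |flow| fits within its (expanded) limit; return per-line additions."""
    added = [0.0] * len(limits)
    while True:
        worst_v, worst_i = 0.0, None
        for i, (f, l) in enumerate(zip(flows, limits)):
            v = abs(f) - (l + added[i])   # violation on line i, in MW
            if v > worst_v:
                worst_v, worst_i = v, i
        if worst_i is None:               # no violations remain
            return added
        added[worst_i] += step

print(expand([120.0, 80.0, 210.0], [100.0, 100.0, 150.0]))  # -> [50.0, 0.0, 100.0]
```

In the real analysis, adding capacity changes the dispatch and therefore the flows, which is why each expansion step is followed by a fresh SCUC solve.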
Ph.D. in Electrical Engineering, July 2011
- Title
- DEPTH MAP PROCESSING FOR MULTI-VIEW VIDEO PLUS DEPTH
- Creator
- Vijayanagar, Krisha Rao
- Date
- 2014, 2014-05
- Description
-
The world of multimedia and visual entertainment has grown in leaps and bounds over the past decade, with 3-D television among the biggest technologies. Among the several formats proposed for representing 3-D content, the multi-view video plus depth (MVD) format has gained considerable interest in recent years. MVD requires that each view of a scene be accompanied by a per-pixel depth map, which introduces new problems for the compression and transmission of MVD content because a depth map has different characteristics from a color image. With the MVD format and depth map characteristics in mind, we highlight three major problems that affect the MVD format: (1) depth map refinement, (2) depth map compression, and (3) novel view synthesis using the depth map at the decoder side. To refine a depth map, we propose a multi-resolution anisotropic diffusion algorithm optimized to run in real time, ensuring that the encoder does not suffer additional latency. Next, we propose two unique solutions for compressing depth maps. The first uses the Layered Depth Video (LDV) concept with a rate-distortion-optimized quadtree decomposition of the LDV and a novel two-mode block truncation code with improved prediction. The second uses compressive sensing (CS) concepts to create a hybrid rate-optimized CS codec that achieves two goals: first, block classification to ensure lower decoder complexity, and second, rate-distortion optimization of the measurement rate for each block to be compressively sensed. We then address the view synthesis component of the MVD tool chain, which is a time-sensitive process. With decoding latency in mind, we propose a lookup-table based approach to the 3-D warping process with a simplified hole-filling algorithm that is not only competitive in quality with other schemes but also several times faster.
We hope the presented techniques can be used successfully to build MVD architectures for applications that need low-complexity encoding solutions.
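Anisotropic diffusion smooths a depth map while halting diffusion across strong edges. A 1-D single-step sketch using the classic Perona-Malik conductance (parameter values are illustrative; the dissertation's multi-resolution, real-time variant is more elaborate):

```python
def diffuse_step(u, kappa=10.0, lam=0.2):
    """One explicit 1-D Perona-Malik step: the conductance c = 1/(1+(g/kappa)^2)
    lets smoothing proceed where gradients g are small and shuts it off across
    strong edges (g >> kappa), which is exactly what a depth edge needs."""
    out = list(u)
    for i in range(1, len(u) - 1):
        gE, gW = u[i + 1] - u[i], u[i - 1] - u[i]
        cE = 1.0 / (1.0 + (gE / kappa) ** 2)
        cW = 1.0 / (1.0 + (gW / kappa) ** 2)
        out[i] = u[i] + lam * (cE * gE + cW * gW)
    return out

# Noisy near-flat depth region followed by a sharp depth edge:
row = [10.0, 12.0, 10.0, 11.0, 100.0, 101.0, 100.0]
print([round(v, 2) for v in diffuse_step(row)])
```

The same conductance idea extends to 2-D by summing the four (or eight) neighbor fluxes, and the multi-resolution variant applies it across a coarse-to-fine pyramid to keep the per-frame cost bounded.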
Ph.D. in Electrical Engineering, May 2014
- Title
- COMPUTER MODELING OF BREAST LESIONS AND STUDIES OF ANALYZER-BASED X-RAY IMAGING
- Creator
- Garcia, Luis De Sisternes
- Date
- 2011-11, 2011-12
- Description
-
Phase-contrast x-ray imaging is an emerging technique that promises highly sensitive medical images of soft tissue, which is difficult to observe via conventional radiography given its small x-ray attenuation differences. One such phase-contrast technique, analyzer-based imaging, has demonstrated that highly detailed breast tissue images can be obtained using synchrotron radiation; however, synchrotron facilities are impractical for clinical use. This thesis presents studies and exposure considerations toward the application of analyzer-based imaging in a clinical environment, particularly for breast imaging. It also introduces a computational breast lesion model that generates randomized three-dimensional phantoms that realistically follow the characteristics observed in real lesions. Moving analyzer-based imaging to clinical application requires consideration of photon noise, inherent in the use of a photon-limited conventional source. We summarize the statistical properties, in the presence of photon noise, of two popular analyzer-based imaging techniques: diffraction-enhanced imaging (DEI) and multiple-image radiography (MIR). The statistics for MIR had not previously been derived and are introduced in this thesis; comparison of the resulting statistical predictions with Monte Carlo simulation results validated the analysis. An expression for the maximum-likelihood (ML) solution for analyzer-based imaging is presented as a way of minimizing the effects of photon noise in the reconstruction of the object's absorption, refraction, and ultra-small-angle scattering properties, and more practical maximum-likelihood expectation-maximization (ML-EM) and maximum a posteriori expectation-maximization (MAP-EM) solutions are also introduced.
The behavior of the ML-EM and MAP-EM solutions was compared, via computer simulation, with the results of the five best-known analyzer-based reconstruction methods; the ML-EM and MAP-EM reconstructions proved closer to the theoretical values because they avoid the known limitations and approximations introduced by the other techniques. We then introduce the development and evaluation of a new computational breast lesion phantom model that can simulate either masses or microcalcifications. The proposed tool generates large numbers of randomized three-dimensional breast lesion simulations following the characteristics normally used to describe breast lesions in clinical practice. The initial motivation for this phantom model was to enable the proposed evaluations of analyzer-based imaging; however, the model became a major focus of this thesis because it improves significantly upon those found in the previous literature. The lesion model can be used for evaluation studies across different breast imaging techniques, as well as for training purposes, and we hope it will become an important resource for the broader mammography research community. As part of the lesion modeling research, we also introduce methods to computationally modify experimental mammography and analyzer-based images of breast tissue so that they realistically present the generated tumor simulations embedded within their parenchyma. The realism of the simulated lesion images was evaluated by comparing 83 real tumor cases observed in mammograms with 83 constructed hybrid images in which simulated tumors matching the characteristics of the real cases were embedded over healthy-tissue backgrounds. As a quantitative comparison, extracted features describing tumor shape and density showed no statistically significant differences between real and simulated tumors.
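For reference, the ML-EM iteration for Poisson data y ~ Poisson(Ax) has a simple multiplicative form. A self-contained toy sketch on a 3x2 system (the matrix and data are illustrative; the analyzer-based imaging model couples absorption, refraction, and scattering and is considerably richer):

```python
def mlem(A, y, n_iter=50):
    """ML-EM for Poisson data y ~ Poisson(A x):
         x <- x * A^T(y / (A x)) / (A^T 1)
    implemented with plain lists for a small dense system matrix A."""
    m, n = len(A), len(A[0])
    x = [1.0] * n                                              # positive start
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]  # A^T 1
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / proj[i] for i in range(m)]
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        x = [x[j] * back[j] / sens[j] for j in range(n)]
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy 3x2 system matrix
y = [2.0, 5.0, 7.0]                       # noiseless data for x_true = [2, 5]
x_hat = mlem(A, y)
print([round(v, 3) for v in x_hat])       # converges toward [2.0, 5.0]
```

The multiplicative update automatically preserves non-negativity, which is one reason EM-type algorithms are preferred over unconstrained least squares for photon-limited data.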
A known computational tumor classification technique, based on tumor shape as observed in mammography, was implemented and showed no significant performance difference between real and simulated cases, and correlated well with previously published results on real tumors. To measure realism for human observer studies, we conducted a reader study in which five experienced radiologists judged whether each of the 166 images was real or simulated, assigning a score on a 7-point scale; the results were analyzed in a multiple-reader multiple-case statistical framework. The study concluded that the readers' accuracy in distinguishing real from simulated lesions was not significantly better than chance. This thesis also includes a reader study evaluating the degree to which photon-limited analyzer-based images can effectively visualize breast cancer features. Our motivation was to establish the x-ray intensity required to make these methods feasible, as a guide for parameter selection in the future design of imaging hardware. We conducted a series of observer studies quantifying the performance of analyzer-based refraction images, at different noise levels, on the task of identifying subtle details of breast tumors relevant to clinical diagnosis. The cases shown to the readers were hybrid images in which simulated lesions of known characteristics were computationally embedded in real analyzer-based breast background images. The original phase-contrast data were obtained using synchrotron radiation and later modified to simulate the noise and blurring produced by a photon-limited source with a 300 μm aperture, similar to those used in a laboratory environment.
Results showed that the analyzer-based imaging techniques statistically outperformed conventional mammography on the given task with an average of just 128 recorded photons per pixel in background image regions.
Ph.D. in Electrical Engineering, December 2011
- Title
- SPECTRUM OBSERVATORY BASED TRAFFIC MODELING AND CHANNEL SELECTION IN SUPPORT OF DYNAMIC SPECTRUM ACCESS
- Creator
- Bacchus, Brent Roger
- Date
- 2015, 2015-05
- Description
-
It is well known that the exponential growth in the popularity of wireless devices has created a demand for radio spectrum that cannot be met under current regulatory policies. Despite the difficulty of procuring access to new spectrum resources, many empirical studies have indicated that the majority of spectrum is in fact unused in the temporal, spatial, and/or spectral domains, representing an untapped wealth that should be exploited. Dynamic Spectrum Access (DSA) is a promising technology that aims to improve the efficiency of future radios and alleviate spectrum under-utilization. This dissertation uses data from the IIT Spectrum Observatory to develop models of channel activity in the Land Mobile Radio (LMR) band (used for critical communication by organizations such as public safety) and shows how such models can be applied to improve DSA performance. We demonstrate that LMR traffic can exhibit multi-timescale behavior, such as clustering and dispersion over different time periods, and propose a novel statistical model for these observations based on a multiple-emission hidden Markov model. We then use this model to design a collision-constrained channel selection algorithm that permits the reuse of licensed spectrum while minimizing interference with incumbent users. The findings are developed primarily for public safety; however, the techniques are general enough to apply to other types of traffic with similar characteristics. The proposed model, in particular, is well suited to further analytic work and simulation studies in this area.
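Collision-constrained channel selection can be illustrated with a drastically simplified occupancy model: treat each channel's busy/idle activity as a two-state Markov chain and pick the idle channel whose estimated idle-to-busy transition probability satisfies a collision budget. A toy sketch (the dissertation's multiple-emission HMM captures far richer multi-timescale behavior than this):

```python
def estimate_idle_to_busy(trace):
    """Estimate P(busy next slot | idle now) from a 0/1 busy trace."""
    idle_to_busy = idle_total = 0
    for prev, nxt in zip(trace, trace[1:]):
        if prev == 0:
            idle_total += 1
            idle_to_busy += nxt
    return idle_to_busy / idle_total if idle_total else 1.0

def select_channel(traces, max_collision=0.2):
    """Among currently idle channels, return the index whose predicted
    collision probability meets the constraint and is lowest, else None."""
    best_i, best_p = None, 1.0
    for i, tr in enumerate(traces):
        if tr[-1] != 0:                  # skip channels that are busy now
            continue
        p = estimate_idle_to_busy(tr)    # predicted collision probability
        if p <= max_collision and p < best_p:
            best_i, best_p = i, p
    return best_i

ch0 = [0, 0, 0, 1, 0, 0, 0, 0]  # mostly idle channel
ch1 = [0, 1, 0, 1, 0, 1, 0, 0]  # bursty channel
print(select_channel([ch0, ch1]))  # -> 0
```

The constraint `p <= max_collision` is what makes the scheme collision-constrained: a channel that is idle right now but historically flips to busy too often is still rejected.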
Ph.D. in Electrical and Computer Engineering, May 2015
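The dissertation's traffic model is a multiple-emission hidden Markov model; as a much-simplified illustration of collision-constrained channel selection, the sketch below (channel names and transition rates are invented for the example) uses a plain two-state idle/busy Markov chain per channel and picks the idlest channel whose stationary busy probability stays within a collision budget.

```python
def stationary_idle_prob(p_ib, p_bi):
    # Two-state Markov chain: idle -> busy with prob p_ib, busy -> idle with
    # prob p_bi. The stationary idle probability is p_bi / (p_ib + p_bi).
    return p_bi / (p_ib + p_bi)

def select_channel(channels, max_collision=0.1):
    # channels: name -> (p_ib, p_bi). A secondary transmission roughly
    # collides when the channel is busy, so we require the stationary busy
    # probability (1 - idle) to stay below the collision budget.
    eligible = {name: stationary_idle_prob(*rates)
                for name, rates in channels.items()
                if 1.0 - stationary_idle_prob(*rates) <= max_collision}
    return max(eligible, key=eligible.get) if eligible else None

channels = {"ch1": (0.30, 0.10), "ch2": (0.02, 0.40), "ch3": (0.01, 0.50)}
print(select_channel(channels))  # ch3: lowest busy probability among eligible
```

Here "ch1" is busy 75% of the time and is excluded outright, while "ch3" wins among the two channels that meet the 10% collision budget.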
- Title
- LOG ANALYSIS FOR RELIABILITY MANAGEMENT IN LARGE-SCALE SYSTEMS
- Creator
- Zheng, Ziming
- Date
- 2012-07-16, 2012-07
- Description
-
With the increasing scale and complexity of high performance computing (HPC) systems, reliability management is becoming a major concern. System logs are the primary source of information for understanding and analyzing system problems. Nevertheless, manual log processing is time-consuming, error-prone, and not scalable, and to date little work has been done on automated log analysis for practical use in HPC systems. In this thesis, we present a log analysis infrastructure that exploits data mining and machine learning technologies. Our work can be broadly divided into four parts: log pre-processing, online failure prediction, automatic root cause diagnosis, and reliability modeling. We evaluate our results by means of system logs collected from production HPC systems. This work can greatly improve our understanding of faults and failures arising from hardware/software components and their interactions, and can further facilitate reliability management for HPC systems.
Ph.D. in Computer Science, July 2012
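As a toy illustration of the log pre-processing step (the masking rules and example messages are invented, not taken from the thesis), one common technique is to collapse raw messages into event templates by masking volatile fields such as numbers and addresses:

```python
import re
from collections import Counter

def template(line):
    # Mask volatile fields (hex addresses, then decimal ids/timestamps) so
    # that lines produced by the same event collapse to one template.
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line

logs = [
    "node 12 failed at 1370012: ECC error at 0x1f3a",
    "node 7 failed at 1370944: ECC error at 0x22b0",
    "link 3 down",
]
counts = Counter(template(l) for l in logs)
print(counts.most_common(1)[0])
```

The two ECC lines collapse to the single template `node <NUM> failed at <NUM>: ECC error at <HEX>`, which downstream failure prediction can then count and correlate.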
- Title
- APPLICATION OF SPECTRUM OBSERVATORY MEASUREMENTS TO SUPPORT TRAFFIC MODEL-BASED DYNAMIC SPECTRUM ACCESS
- Creator
- Taher, Tanim Mohammed
- Date
- 2014, 2014-07
- Description
-
In a 2012 report, the President's Council of Advisors on Science and Technology (PCAST) published a memorandum that calls for the identification of 1000 MHz of Federal spectrum to be shared with private (commercial) users. This dissertation proposes a system that employs RF measurements for spectrum usage modeling, and Dynamic Spectrum Access (DSA) methodologies that utilize the modeling information to permit sharing of wireless resources. A procedure called the Comprehensive Band Modeling (CBM) procedure is developed that automatically models measured RF data from any band of interest and identifies the locations of signals and holes present in the band. The output of the CBM procedure is summarized in a compact, versatile format that makes DSA applications feasible. The research primarily focuses on the 450-474 MHz land mobile radio (LMR) band, with several additional bands such as the TV band and the 2.5-2.7 GHz band. However, the research methodology and techniques are broadly applicable to many more frequency ranges. The research has four main areas: (a) spectrum sensor design and measurements, (b) occupancy modeling, (c) communicating the modeling information in a compact form to secondary users to support DSA algorithms and protocols, and (d) tools and metrics for spectrum sharing favorability analysis. Three spectrum sensor platforms were employed in measurements: (1) a spectrum analyzer based Spectrum Observatory (SO) that was developed earlier, (2) a specially purposed software-defined radio (SDR) for measuring LMR channels, and (3) a high-speed and portable SO system based on a sensor called the RFeye. An SO continually measures RF data in a band at a temporal resolution high enough that channel switching activity, such as transmitters turning on and off, is visible. Spectrum measurements of the individual RF channels in the 450-474 MHz LMR band and the two commercial bands are used to generate statistical traffic and occupancy models.
Long-term measurement data is used to assess how stationary each channel is, and how often the model parameters need to be updated. The spectrum observatory supports a network of Secondary Users (SUs) by communicating the traffic model parameters in a compact format to the SUs. The SUs share Primary User (PU) channels via DSA techniques. The DSA algorithms take advantage of the model parameters provided by the SO to maximize SU throughput with limited interference to the PU. The DSA coexistence techniques are evaluated via simulation. The simulation results, including Spectrum Opportunity Accessed (SOA), SU throughput, and collision rates, are then analyzed to provide an assessment of DSA-based spectrum sharing in that band. The main contribution of this dissertation is the aforementioned CBM procedure. The white spaces in the frequency and time domains, that is, the underutilized spectrum opportunities available for possible secondary use via DSA, are automatically identified, as well as the frequency locations that are not conducive to DSA due to the presence of frequent primary licensee transmissions. In CBM, white spaces are referred to as 'Holes', and the licensed primary transmission frequencies as 'Signals'. Useful information about the duty cycles and traffic patterns of incumbent users' activity within possible secondary-use channels is extracted and modeled. The model enables prospective secondary users of white spaces to predict the expected level of interference in any channel, which allows for channel ranking and optimal selection of DSA transmission parameters. The CBM model is describable by a tiered structure, where the first tier identifies the holes and signals; the second tier ranks the holes in terms of available bandwidth and incumbent duty cycle; and the third tier models the infrequent incumbent transmissions. With the three tiers of information, an SU can readily identify all the suitable DSA channels within the entire spectrum band.
This essential summary information is retrieved as a “Hole Descriptor Object” (HDO) that is both compact and tractable. Empirical spectrum measurement data obtained from the three different SO platforms is used to test the performance of the CBM procedure in the 2500-2700 MHz frequency range that currently has WiMAX deployments, the TV white space band, and the 450-474 MHz LMR band in Chicago. Spectrum measurement data runs into hundreds of megabytes or gigabytes. As such, the raw information is not very applicable in practical wireless networks. The HDO objects on the other hand are compact and only kilobytes in size. The HDO objects contain all the useful and applicable information necessary for any smart radio (primary or secondary) to select transmission parameters like frequency of operation and bandwidth, so that it can efficiently operate. Thus, the advantage of the CBM procedure is that it summarizes gigabytes of raw spectrum measurements in a usable compact format that can be directly used by practical smart radios to operate using DSA paradigms. Another advantage of CBM is that it is comprehensive and automatically identifies all holes and signals. The research findings are of interest and value to a variety of Federal and Commercial entities. The models and relevant model parameters for public safety radio in the LMR band have been provided on request to the Public Safety and Homeland Security Bureau of the Federal Communications Commission (FCC). The DSA feasibility analysis methodology is of great national economic interest based on the contents of the PCAST report. The PCAST report recommends finding 1000 MHz of federal frequencies to be allocated for shared commercial and federal use. 
However, the technology for doing so and for identifying the suitable bands requires measurements of actual spectrum usage, modeling of the occupancy and existing traffic activity, and assessment of DSA feasibility; these are important research aspects, all of which are addressed in this dissertation. The results are of crucial importance to policy makers like the FCC and NTIA who will ultimately make the spectrum allocation decisions. A future network of commercial DSA SU radios operating in a shared band is likely to need access to a system providing live information about PU activity in order to operate in the band with high throughput and low interference. The overall system proposed in this thesis, based on the CBM procedure and HDO objects, describes a framework for providing this information as a service to DSA networks, and hence the work is also of practical relevance to radio system designers.
Ph.D. in Electrical Engineering, July 2014
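A heavily simplified sketch of the first two CBM tiers (the channel names, occupancy threshold, and dictionary layout are illustrative assumptions, not the dissertation's actual HDO format): classify channels as Holes or Signals by measured duty cycle, then rank the holes by how lightly the incumbent uses them.

```python
def hole_descriptor(occupancy, threshold=0.1):
    # occupancy: per-channel lists of 0/1 sweep decisions (1 = signal seen).
    # Tier 1: classify each channel as a Hole or a Signal by duty cycle.
    # Tier 2: rank the holes, lightest incumbent activity first.
    duty = {ch: sum(v) / len(v) for ch, v in occupancy.items()}
    holes = sorted((ch for ch, d in duty.items() if d <= threshold),
                   key=lambda ch: duty[ch])
    signals = [ch for ch, d in duty.items() if d > threshold]
    return {"holes": holes, "signals": signals, "duty": duty}

occ = {"462.1MHz": [0, 0, 1, 0],
       "462.2MHz": [1, 1, 1, 0],
       "462.3MHz": [0, 0, 0, 0]}
hdo = hole_descriptor(occ)
print(hdo["holes"], hdo["signals"])
```

The point of the compact descriptor is the same as in the abstract: gigabytes of raw sweeps reduce to a few duty-cycle numbers per channel that a secondary radio can act on directly.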
- Title
- COMMUNICATION AND COMPUTATION ARCHITECTURES FOR DISTRIBUTED WIRELESS SENSOR NETWORKS AND INTERNET OF THINGS
- Creator
- Yi, Won-jae
- Date
- 2017, 2017-07
- Description
-
Real-time data communication has become pervasive since the smartphone rose to prominence in this decade. Communications from human to human, from device to human, and from device to device are handled over an Internet connection, either through a mobile Internet service provider or Wi-Fi, enabling information exchange such as weather services, road traffic conditions, news alerts, and package tracking notifications. Viewed from this perspective, the smartphone reveals itself as an ideal device for mobilizing critical user data to construct real-time monitoring applications such as remote healthcare and home automation systems. Not only can the smartphone handle real-time data transmissions, but it can also handle real-time computations on the device itself by utilizing its embedded CPU. This dissertation is a comprehensive study of the investigation, exploration and experimentation on a real-time health monitoring system that can improve quality of life where conventional systems would hamper regular daily activities. The design of this system is based on Internet connectivity, where any device that is communicatively associated with the smartphone can be connected to the Internet. By utilizing the Android smartphone, the system not only gains real-time data transmission capability but also obtains the flexibility to communicate with different types of sensors and platforms through multiple wireless protocols. The system is highly adaptable to the emerging Internet of Things (IoT) standards, whose anticipated social impact is significant: it can assist populations in rural and distant areas with healthcare, day-to-day activity monitoring, and protection of workers against hazardous conditions.
The system architecture introduced in this research focuses on the reconfigurability and platform independence of wireless sensors: sensors are not limited to medical devices but may also detect movement, location, climate conditions, or any other aspect of the environment. Four major components are introduced in this research: wireless sensor nodes, a central sensor data processing and communication node, an Android application, and a central database server. They are discussed and explored to seek solutions that improve and enhance features of the fundamental system design. Communication and computation processing capabilities are evaluated for all major components for practical usage of the system in different case studies. As a quantitative case study, a posture and fall detection system is presented which determines the patient's activities, medical conditions and the cause of an emergency event through the integration of all system architecture components. Adaptation to IoT systems is also explored in this dissertation by introducing a protocol standard that improves data transmission efficiency and enables cross-platform compatibility of wireless devices. In addition to improving system efficiency, data security issues and sensor data assessment are studied by applying a proposed security scheme to each major component within the real-time mobile monitoring system. Also, a concept of Quality-of-Service (QoS) for a mobile monitoring system using a wireless sensor network is investigated, providing a solution to prioritize sensor data transmissions based on the results obtained from the sensor data assessment application. The proposed solutions can be implemented either on or under the application layer.
Ph.D. in Computer Engineering, July 2017
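Posture and fall detection from a body-worn accelerometer is often sketched as an impact-then-stillness heuristic; the thresholds, window length, and sample data below are illustrative guesses, not the dissertation's algorithm.

```python
import math

def detect_fall(samples, impact_g=2.5, still_tol=0.3):
    # samples: (ax, ay, az) accelerometer readings in units of g.
    # Crude fall signature: a high-magnitude impact followed by a short
    # window where magnitude stays near 1 g (the wearer is lying still,
    # sensing only gravity).
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i in range(len(mags) - 3):
        still = all(abs(m - 1.0) < still_tol for m in mags[i + 1:i + 4])
        if mags[i] >= impact_g and still:
            return True
    return False

walking = [(0.1, 0.2, 1.0), (0.0, 0.3, 0.9), (0.2, 0.1, 1.0), (0.1, 0.0, 1.1)]
fall = [(0.0, 0.0, 1.0), (1.8, 1.5, 2.0), (0.1, 0.0, 1.0),
        (0.0, 0.1, 0.95), (0.0, 0.0, 1.05)]
print(detect_fall(walking), detect_fall(fall))
```

In a real deployment this check would run on the smartphone (or sensor node) and trigger the emergency-event pipeline the abstract describes.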
- Title
- Optimal Behavior Modeling and Analysis of Electricity Market Participants
- Creator
- Li, Jie
- Date
- 2012-04-27, 2012-05
- Description
-
In restructured electricity power markets, competition among market participants is a key issue of concern for both the ISO (Independent System Operator) and the market participants themselves. This dissertation analyzes the market behavior of both generation-side and demand-side participants, and provides solution guidelines for devising effective competition strategies for market players' profit maximization objectives. The generation side is the most competitive part of the electricity market after the unbundling of generation, transmission and distribution. Acting as self-interested entities, GENCOs (Generation Companies) seek effective and computationally efficient methodologies for generation resource scheduling that keep their financial risks at acceptable levels when constituting bidding strategies. To help GENCOs achieve this goal, this dissertation proposes a game-theory-based, supply-function-like bidding model to construct optimal bidding strategies for GENCOs in both the energy and ancillary service markets. On the demand side, demand participation in the electricity market has long been advocated for its benefit to the entire market and to society as a whole. This dissertation focuses on a specific large electricity consumer type: the Internet Data Center (IDC). By analyzing the unique energy consumption patterns of different IDC applications, this dissertation devises effective electric demand management solutions for IDCs that conserve electricity consumption and cut electricity bills, and quantifies the demand response effect of IDCs on the electricity market.
Ph.D. in Electrical Engineering, May 2012
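As a toy illustration of the demand-side idea (the prices, load figures, and greedy rule are invented for the example; the dissertation's IDC demand management is far more elaborate), deferrable data center load can be shifted into the cheapest day-ahead hours:

```python
def schedule_batch(prices, load_mwh, max_per_hour):
    # Greedy sketch: place a deferrable batch workload (e.g. overnight log
    # processing) into the cheapest hours first, capped by the facility's
    # per-hour power rating.
    order = sorted(range(len(prices)), key=lambda h: prices[h])
    plan = [0] * len(prices)
    remaining = load_mwh
    for h in order:
        take = min(max_per_hour, remaining)
        plan[h] = take
        remaining -= take
        if remaining <= 0:
            break
    cost = sum(p * e for p, e in zip(prices, plan))
    return plan, cost

prices = [30, 22, 55, 18]  # hypothetical day-ahead $/MWh for four hours
plan, cost = schedule_batch(prices, load_mwh=5, max_per_hour=3)
print(plan, cost)  # [0, 2, 0, 3] 98
```

Running all 5 MWh in the most expensive hour would cost $275; shifting it to the two cheapest hours cuts the bill to $98, which is the kind of demand-response effect the abstract quantifies at market scale.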
- Title
- ULTRASONIC RANGING AND INFRARED DEPTH PROFILING FOR 3D IMAGE RECONSTRUCTION AND SCENE ANALYSIS
- Creator
- Jia, Weldi
- Date
- 2013, 2013-07
- Description
-
This doctoral work could not have been done without the help, support and dedication of numerous people. First of all, I give my great thanks to my advisor Dr. Jafar Saniie, who was patient and knowledgeable in providing advice, suggestions and guidance throughout all six years of my study. I would like to express my sincere thanks for his encouragement and financial support during my study. I will never forget the days and nights he spent with me doing research work in the ECASP research lab. His spirit of careful searching, friendly talking and knowledgeable thinking stays in my mind forever. My gratitude extends to my committee members, Dr. Anjali, Dr. Moderes and Dr. Oruklu. Also, I would like to give my thanks to my colleagues and friends, especially the people in the ECASP research lab: Won-Jae, Sufeng, Thomas, Spenser and Pramod. Their kindness and deep knowledge in different fields helped me enhance my work so much. I will never forget the days debugging programs with them and the days we cheered for our success. I would like to dedicate this thesis to my family, especially to my grandfather, who just passed away but gave me financial support and advice from childhood until now; my father, who has not been able to speak since an accident during my study; my mother, who has been taking care of my father herself during the past six years; and my wife, Wenhui Liu, who encouraged and helped me while living in the United States. I promise that I will use what I learned here to change the world; their constant support of my academic ventures from the beginning to the present has been invaluable. Thank you, Grandpa; rest in peace in heaven.
Ph.D. in Electrical Engineering, July 2013
- Title
- EFFICIENT AND FAIR RESOURCE ALLOCATION FOR OFDMA NETWORKS
- Creator
- Alavi, Seyed Mohamad
- Date
- 2012-11-26, 2012-12
- Description
-
In Orthogonal Frequency Division Multiple Access (OFDMA) systems, resources, including subcarriers, bits and power, need to be adaptively allocated to users in order to improve spectral efficiency, increase capacity, and reduce power consumption, while satisfying the Quality of Service (QoS) requirements of users. Most previous works concentrate on satisfying rate and power requirements; however, meeting delay requirements is also necessary, especially with the increasing demand for delay-sensitive applications. We first model the resource allocation problem as a cross-layer optimization problem considering constraints on bit error rate (BER), data rate, total power, and delay. We develop a nonlinear optimization model, which generally requires high computational complexity. To consider a more realistic scenario, we take into account imperfect Channel State Information (CSI) due to estimation errors or channel feedback delay, and incorporate the imperfect CSI into the optimization problem formulation. We then derive the solution through a dual decomposition method. Due to the duality gap between the original and dual optimizations, we convert the nonlinear optimization to an equivalent linear formulation so that an exact solution can be obtained. To further reduce the complexity, we develop a heuristic algorithm that provides a solution close to the optimum. Then, we study the notion of fairness in the context of resource allocation. In particular, cooperative game theory can be applied to OFDMA networks for fair resource allocation. We apply two cooperative games, the Non-Transferable Utility (NTU) game and the Transferable Utility (TU) game, to provide fairness in OFDMA networks. In the NTU game, fairness is achieved by defining an appropriate objective function, while in the TU game, fairness is provided by forming an appropriate network structure.
For the NTU game, we analyze the Nash Bargaining Solution (NBS) as a solution of the NTU game taking into account CSI and Queue State Information (QSI). For the TU game, we show that a coalition among subcarriers to jointly provide rate requirements leads to better performance in terms of power consumption. We show that although the NTU and TU games are modeled as rate-adaptive and margin-adaptive problems, respectively, both solutions provide a fair distribution of resources with a minimum fairness index of 0.8. Although the NBS can provide fairness, the fairness is not from the user's perspective. In competitive fairness, which is based on auction theory, each user is responsible for his/her own actions. A distributed allocation of resources in OFDMA networks is studied through auction theory. A combinatorial auction is formulated in which the users' utilities enforce truthful resource demands. Since the original problem is NP-hard, a method based on simulated annealing is applied to find near-optimum results. Then, we turn our attention toward the more complicated scenario of multicell OFDMA networks. A combinatorial auction which takes into account the interference from adjacent cells is presented. The auction's objective is to minimize interference while the power of users is limited. Due to the complexity of the original problem, we apply a heuristic approach in which the bids are ordered based on a linear programming approximation of the combinatorial auction, and then local improvements are made in the order of bids. Our iterative approach, along with the proposed load control scheme, provides a fair distribution of resources to the users regardless of their position in the cell. Finally, we propose a comprehensive auction in OFDMA networks. We present an auction framework for the allocation of subcarriers in which the winner pays monitoring and entry fees in addition to the price paid for the allocated subcarrier.
We prove that in our framework users will avoid bidding for subcarriers where they have a relatively low chance of winning. We obtain the optimal bidding strategy based on the Bayesian Nash Equilibrium (BNE), in which users maximize their net profit. In a Fractional Frequency Reuse (FFR) implementation of frequency planning, we find a focal distance that classifies users into cell-center and cell-edge users. It is shown that the focal distance increases as the interference decreases.
Ph.D. in Electrical Engineering, December 2012
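The "fairness index" reported above is conventionally Jain's index (an assumption here, since the abstract does not name the metric), which is trivial to compute over a rate allocation:

```python
def jain_index(rates):
    # Jain's fairness index: (sum x)^2 / (n * sum x^2).
    # Equals 1.0 for a perfectly equal allocation and 1/n in the worst case
    # where one user gets everything.
    n = len(rates)
    return sum(rates) ** 2 / (n * sum(x * x for x in rates))

print(jain_index([2.0, 2.0, 2.0, 2.0]))          # equal split is perfectly fair
print(round(jain_index([4.0, 1.0, 1.0, 1.0]), 3))  # skewed split scores lower
```

By this measure, a reported minimum index of 0.8 means even the worst-case allocation stays much closer to the equal-split end of the scale than to the winner-take-all end.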
- Title
- INJECTION LOCKING BASED ULTRA LOW POWER RADIO FREQUENCY INTEGRATED CIRCUITS
- Creator
- Zhu, Qiang
- Date
- 2012-05-31, 2012-07
- Description
-
Recent advances in radio frequency integrated circuit (RFIC) technology enable various innovative and versatile applications through ultra-low-power wireless links such as mesh sensor networks, personal area networks (PAN) and semi-active RFID. This thesis introduces energy-efficient demodulator and transceiver designs for wireless communications. At the receiver front end, an ultra-low-power BPSK demodulator based on injection-locked oscillators (ILOs) is introduced. Two second-harmonic ILOs are employed to convert BPSK signals to ASK signals, which are then demodulated by an envelope detector to baseband. For sub-GHz applications, the ILOs are implemented using ring oscillators to allow compact chip area and ultra-low power dissipation. Bit error rate (BER) analysis of this demodulator indicates erroneous polarity flipping of demodulated bits due to the phase noise of the ILO. The prototype chip, fabricated in a 65nm CMOS technology, consumes 228μW of power and occupies 0.014mm2 of die area. Measurement results reveal the demodulation of a 750MHz 5Mb/s differential BPSK signal with a sensitivity of -43dBm. The theoretical BER analysis has been verified: erroneous flipping was observed in the measurement with a probability close to the prediction. Then, an innovative injection-locking-based transceiver architecture for ultra-low-power operation is proposed. It applies the ILO-based BPSK demodulator at the receiver side. The oscillating signal from one receiver ILO is also injected into a transmitter ILO for accurate carrier generation; thus, the local frequency synthesis circuit, which consumes a considerable portion of power in a traditional transceiver, is not required. This design is implemented in a 45nm CMOS SOI technology. Measurement results indicate that the transceiver achieves downlink demodulation of a -35dBm BPSK signal at a 5Mb/s data rate and uplink transmission of a -23dBm ASK signal at a 1Mb/s data rate with 0.93mA current consumption from a 1V power supply.
Ph.D. in Electrical Engineering, July 2012
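The final ASK-to-baseband step mentioned above, envelope detection, can be illustrated in a few lines (the carrier period, window, and threshold are arbitrary choices for the demo, not the chip's parameters): rectify, average over a bit period, and slice against a threshold.

```python
import math

def envelope_detect(samples, window, threshold):
    # Rectify the waveform, average over one bit period to approximate the
    # envelope, then slice against a threshold to recover on/off bits.
    rect = [abs(s) for s in samples]
    bits = []
    for i in range(0, len(rect), window):
        env = sum(rect[i:i + window]) / window
        bits.append(1 if env > threshold else 0)
    return bits

# Toy ASK signal: 4 carrier cycles per bit, amplitude 1 for '1', 0 for '0'.
carrier = [math.sin(2 * math.pi * i / 5) for i in range(20)]
signal = [b * c for b in (1, 0, 1, 1) for c in carrier]
print(envelope_detect(signal, window=20, threshold=0.3))
```

On a "1" bit the averaged rectified sine sits near 2/π of the carrier amplitude, comfortably above the threshold, while "0" bits average to zero; this is the simple slicer that lets the ILO front end hand off an ASK waveform instead of requiring coherent BPSK detection.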
- Title
- INCORPORATING REACTIVE POWER MARKET INTO THE DAY-AHEAD ELECTRICITY MARKET
- Creator
- Al Ghamdi, Mohammed
- Date
- 2012-05-29, 2012-07
- Description
-
The research work presented in this thesis proposes the incorporation of the reactive power market into the day-ahead electricity market in order to compensate generation companies (GENCOs) and independent power producers (IPPs) for providing any additional reactive power support, which varies on an hourly basis with the load demand, transmission network configuration, and any contingencies that might occur. This proposal would minimize the total payment burden on the independent system operator (ISO) related to reactive power dispatch. The proposed model achieves the main objective of an ISO in a competitive electricity market, which is to provide the required reactive power support from generators at minimum cost while ensuring the secure operation of the power system. In this research, the reactive power price is the bidding-based price that is submitted by the GENCOs and IPPs to the ISO during the day-ahead market. The proposal takes into account both the technical and economic aspects associated with active and reactive power dispatch in the context of the new operating paradigms in competitive electricity markets. Security Constrained Unit Commitment (SCUC) based on AC power flow modeling is considered as the driving engine for clearing the day-ahead electricity market based on the information provided by the market participants. This proposed framework would provide appropriate reactive power support from service providers at minimum cost, while ensuring the secure and reliable operation of the electrical power system. In the research, the PQ capability curves of the generating units are modeled to ensure the practicality of the SCUC solutions that are obtained. This proposal would be an essential step toward a fair electricity market while increasing the security of the power system and reducing transmission congestion.
It would also pave the way for various renewable energy resources, since the penetration of renewable energy resources would impact the commitment of the generating units, and hence the available reactive power reserve margin and the security of the network. In addition, incorporating the reactive power market into the day-ahead market would provide a clear signal for optimal private investment in reactive power capacity. The framework that has been developed is general in nature and can be used for any electricity market structure.
Ph.D. in Electrical Engineering, July 2012
- Title
- EXPLOITING NETWORK CODING IN DIFFERENT WIRELESS NETWORKS
- Creator
- Guo, Bin
- Date
- 2012-07-06, 2012-07
- Description
-
Wireless communication networks have been incorporated into our daily life and provide convenience anytime and anywhere. However, the wireless medium is unreliable and unpredictable, and current wireless networks suffer from low throughput and low reliability. Network coding, an alternative approach, has attracted increasing interest and has emerged as an important technology in wireless networks. It can provide significant potential throughput improvements and a high degree of robustness. This dissertation is built on the theory of network coding; in it, different network coding protocols are designed for varied wireless networks. The first part of this dissertation proposes a novel coding-aware routing protocol for wireless mesh networks. In particular, a generalized coding condition is formally established to identify coding opportunities. Based on this coding condition analysis, a novel routing metric, FORM (Free-ride Optimal Routing Metric), and the corresponding routing protocol are developed with the objective of exploiting coding opportunities and maximizing the benefit of "free-ride" in order to reduce the total number of transmissions and consequently increase network throughput. The results show the proposed protocol achieves significant throughput gains over existing approaches. The second part of this dissertation exploits network coding in wireless cooperative networks. Firstly, a Decode-and-Forward Network Coded (DFNC) protocol is proposed for multi-user cooperative communication systems. In particular, DFNC develops an efficient construction method for coding coefficients and a novel decoding algorithm that combines network coding and channel coding. DFNC exploits both temporal and spatial diversity through multiple channels by allowing all users to generate redundant network-coded packets in a distributed manner, which helps fully exploit the redundancy provided by network coding to realize error correction.
Theoretical analysis and simulation results demonstrate that DFNC outperforms other transmission schemes in terms of Symbol Error Rate (SER) and achieves higher diversity order. Secondly, the idea of DFNC is extended and Modified-DFNC (M-DFNC) is introduced for a more practical scenario: not all the users will be able to dedicate their resources to provide assistance for others. The throughput analysis shows that M-DFNC outperforms the conventional cooperative protocol in the low-SNR regime and it implies that an adaptive cooperation system should be adopted to optimize the performance. The simulation results validate the theoretical analysis.
Ph.D. in Electrical Engineering, July 2012
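The "free-ride" benefit of network coding comes from the classic XOR relay exchange, sketched below with made-up packet contents: two unicast transmissions at the relay are replaced by one coded broadcast, and each side decodes using the packet it already holds.

```python
def xor_packets(a, b):
    # Bitwise-XOR two equal-length packets into one coded packet.
    return bytes(x ^ y for x, y in zip(a, b))

# Two-way relay: Alice and Bob each want the other's packet. Instead of
# forwarding each packet separately, the relay broadcasts their XOR once.
alice_pkt = b"HELLO"
bob_pkt = b"WORLD"
coded = xor_packets(alice_pkt, bob_pkt)

print(xor_packets(coded, alice_pkt))  # Alice XORs with her own copy -> Bob's packet
print(xor_packets(coded, bob_pkt))    # Bob XORs with his own copy -> Alice's packet
```

One broadcast thus does the work of two transmissions, which is exactly the reduction in transmission count that coding-aware metrics such as FORM try to maximize along a route.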