Search results
(1 - 17 of 17)
- Title
- POINT CLOUD FUSION BETWEEN AERIAL AND VEHICLE LIDAR
- Creator
- Guangyao, Ma
- Date
- 2015, 2015-05
- Description
-
Because of the increasing demand for precision in 3-D mapping, LiDAR is now used to build more accurate maps. Although great progress has been made in this area, problems remain. The one addressed in this thesis is that we have two point sources, Aerial LiDAR data (points collected by airplane) and Vehicle LiDAR data (points collected by vehicle), which have different densities and cannot be merged well. This process, fusion, is similar to registration; the difference is that the points to be merged are generated by different devices and share only a few point pairs in the same region. For example, Aerial LiDAR data has a higher point density on roofs and the ground but a lower density on walls, while Vehicle LiDAR data has many points on walls and the ground. Minimizing the difference between these two point sets is beneficial, since the process is necessary for modeling, registration, and so on. This thesis therefore aims to minimize the difference between the two data sources, a procedure of fusion. The main idea is to read the LiDAR data into a point-cloud data structure, sample both clouds to a similar density, and select several corresponding special region pairs (which we call chunks, e.g., the median strip and the boundaries of a road) with sufficient interesting points for fusion. Interesting points are points with one or more special features among all points. The algorithm used to implement the fusion is ICP (the Iterative Closest Point algorithm). Unlike point cloud registration, research on fusion is rare, so existing algorithms are not well suited to this project. Because the original ICP algorithm does not work well here, I derive new algorithms during my research: both the update equation and the objective function are modified.
In this thesis, PCL (the Point Cloud Library) is mainly used to implement basic functions such as finding the nearest points and sampling the point cloud, while the Eigen library is used to write the core functions (e.g., the modified Iterative Closest Point algorithm). The libLAS library implements the I/O operations, and MeshLab visualizes the point cloud after modification.
M.S. in Computer Science, May 2015
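The basic ICP loop the abstract describes can be sketched in a few lines. This is a hypothetical, translation-only toy (the thesis modifies both the update equation and the objective function, which this sketch does not reproduce): match each source point to its nearest target point, then shift the source by the mean residual.

```python
# Minimal translation-only ICP sketch (illustrative, not the thesis code).
# Aligns a "vehicle" point set to an "aerial" point set by iterating:
#   1) match each source point to its nearest target point,
#   2) shift the source by the mean residual of the matches.

def nearest(p, targets):
    """Return the target point closest to p (brute force)."""
    return min(targets, key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def icp_translate(source, target, iters=20):
    """Estimate the translation that aligns source onto target."""
    tx, ty = 0.0, 0.0
    pts = list(source)
    for _ in range(iters):
        pairs = [(p, nearest(p, target)) for p in pts]
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        tx, ty = tx + dx, ty + dy
        pts = [(p[0] + dx, p[1] + dy) for p in pts]
    return tx, ty

aerial = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
vehicle = [(x + 0.5, y - 0.25) for x, y in aerial]   # shifted copy
tx, ty = icp_translate(vehicle, aerial)              # recovers (-0.5, +0.25)
```

A real pipeline would also estimate rotation (e.g., via an SVD-based rigid transform, as PCL's ICP does) and, per the thesis, restrict matching to selected chunks with enough interesting points.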
- Title
- COMPRESSIVE SENSING AND RECONSTRUCTION : THEORY AND APPLICATIONS
- Creator
- Krishnamurthy, Ritvik Nadig
- Date
- 2014, 2014-07
- Description
-
The conventional approach to acquiring and reconstructing images from the frequency domain strictly follows the Nyquist sampling theorem, which states that the sampling frequency required for complete reconstruction of a signal is at least twice the maximum frequency of the original signal. This dissertation studies an emerging theory called Compressive Sensing (or Compressive Sampling), which goes against this conventional wisdom: theoretically, it is possible to reconstruct images or signals accurately from far fewer samples than the Nyquist rate requires. Compressive Sensing has proven to have implications beyond merely reducing the sampling frequency of the signal, such as new methods for acquiring analog data in digital form using fewer sensors, and image acquisition using much smaller sensor arrays. This novel theory combines sampling and compression, thereby reducing the data acquisition resources required, such as the number of sensors, the storage for collected samples, and the maximum operating frequency. This dissertation presents insights into the reconstruction of grey-scale images and audio signals using the OMP and CoSaMP algorithms. It also delves into some of the key mathematical insights underlying this new theory and explains some of the interactions between Compressive Sensing and related fields such as statistics, coding theory, and theoretical computer science.
M.S. in Computer Engineering, July 2014
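The greedy recovery idea behind OMP can be illustrated with a toy sketch (hypothetical, not the dissertation's implementation). With an orthonormal dictionary, the least-squares step of OMP reduces to taking inner products, so each round simply picks the atom most correlated with the residual:

```python
# Greedy sparse recovery in the spirit of OMP (toy sketch). Assumes an
# orthonormal dictionary, so the coefficient of the chosen atom is just
# its inner product with the residual; real OMP solves a least-squares
# problem over all selected atoms each round.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def omp_orthonormal(y, atoms, k):
    """Recover a k-sparse coefficient dict for y over orthonormal atoms."""
    residual = list(y)
    coeffs = {}
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = max(range(len(atoms)), key=lambda i: abs(dot(residual, atoms[i])))
        c = dot(residual, atoms[j])           # exact since atoms are orthonormal
        coeffs[j] = coeffs.get(j, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
    return coeffs

# 4-D standard basis as the dictionary; y is 2-sparse.
basis = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
y = [0.0, 3.0, 0.0, -1.5]                     # = 3*e1 - 1.5*e3
coeffs = omp_orthonormal(y, basis, k=2)       # {1: 3.0, 3: -1.5}
```

The point of compressive sensing is that recovery still works when the measurements number far fewer than the signal's dimension, provided the dictionary satisfies incoherence conditions; that case requires the full least-squares step.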
- Title
- EFFICIENT SCORING AND RANKING OF EXPLANATION FOR DATA EXCHANGE ERRORS IN VAGABOND
- Creator
- Wang, Zhen
- Date
- 2014, 2014-05
- Description
-
Data exchange is widely used in the big data era. One challenge for data exchange is identifying the true cause of data errors introduced during schema translation: the huge amount of data and schemas makes it nearly impossible to find "the" correct solution. The Vagabond system was developed to address this problem; it uses best-effort methods to rank explanations of data exchange errors by the likelihood that they are the correct solution. Ranking is done with scoring functions that model properties of explanation sets, such as complexity (the size of an explanation) and side-effect size (the number of correct data values that would be affected by the changes). This thesis introduces three new scoring functions to increase the applicability of Vagabond under various data exchange scenarios. We prove that the monotonicity property required by Vagabond may not hold for some of the new scoring functions, so a new generic ranker is also introduced to efficiently rank error explanations for these new scoring functions, as well as for future scoring functions that have the boundary property: upper or lower bounds on the score of partial solutions can be computed efficiently. We also ran performance experiments on the new scoring functions and the new ranker; the results show that the new scoring functions introduced in this thesis scale well.
M.S. in Computer Science, May 2014
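The boundary property the abstract mentions enables a standard pruning pattern, sketched here with hypothetical scoring functions (not Vagabond's actual ones): visit candidates in order of an optimistic bound and stop computing expensive exact scores once the bound can no longer beat the best score found.

```python
# Bound-based search for the best-scoring explanation (toy sketch).
# lower_bound(c) must be cheap and never exceed exact_score(c); lower
# scores are better. Visiting candidates in bound order lets us stop
# early: once the next bound is no better than the best exact score,
# no later candidate can win.

def best_explanation(candidates, lower_bound, exact_score):
    """Return (best candidate, number of exact scores computed)."""
    best, best_score, evaluated = None, float("inf"), 0
    for cand in sorted(candidates, key=lower_bound):
        if lower_bound(cand) >= best_score:
            break                      # bounds are sorted: nothing later can win
        s = exact_score(cand)
        evaluated += 1
        if s < best_score:
            best, best_score = cand, s
    return best, evaluated

# Hypothetical explanations: (name, size, side_effects).
expls = [("A", 1, 5), ("B", 2, 0), ("C", 4, 4), ("D", 7, 1)]
best, evaluated = best_explanation(
    expls,
    lower_bound=lambda c: c[1],              # size alone bounds the score below
    exact_score=lambda c: c[1] + c[2],       # size + side-effect size
)
# best == ("B", 2, 0); only 2 of 4 exact scores were computed
```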
- Title
- REPRODUCIBLE NETWORK RESEARCH WITH A HIGH-FIDELITY SOFTWARE-DEFINED NETWORK TESTBED
- Creator
- Wu, Xiaoliang
- Date
- 2017, 2017-05
- Description
-
Network research, as an experimental science, ought to be reproducible. However, it is not standard practice to share the models, methods, or software needed to support experimental evaluation and reproducibility when publishing a network research paper. In this work, we advocate reproducible networking experiments by building a unique testbed consisting of container-based network emulation and physical devices. The testbed provides a realistic and scalable platform for reproducing network research: it supports large-scale network experiments using lightweight virtualization techniques and is capable of running across distributed physical machines. We use the testbed to reproduce network experiments and demonstrate its effectiveness by comparing the results with the originally published experiments, such as Hedera, a scalable and adaptive traffic flow scheduling system for data center networks.
M.S. in Computer Science, May 2017
- Title
- A HARDWARE-IN-THE-LOOP SOFTWARE-DEFINED NETWORKING TESTING AND EVALUATION FRAMEWORK
- Creator
- Yang, Qi
- Date
- 2017, 2017-05
- Description
-
The transformation of innovative research ideas into production systems depends heavily on the ability to perform realistic and reproducible network experiments. Simulation testbeds offer scalability and reproducibility but lack fidelity due to model abstraction and simplification, while physical testbeds offer high fidelity but lack reproducibility and are often technically challenging and economically infeasible for large-scale experiments. In this work, we present a hybrid testbed consisting of container-based network emulation and physical devices to support high-fidelity and reproducible networking experiments. In particular, the testbed integrates network emulators (Mininet) [5], a distributed control environment (ONOS) [1], physical switches (Pica8), and end-hosts (Raspberry Pi and commodity servers). The testbed (1) offers functional fidelity through unmodified code execution on an emulated network, (2) supports large-scale network experiments using lightweight OS-level virtualization techniques and can run across distributed physical machines, (3) provides topology flexibility, and (4) enhances the repeatability and reproducibility of network experiments. We validate the fidelity of the hybrid testbed through extensive experiments under different network conditions (e.g., varying topology and traffic pattern) and compare the results with benchmark data collected on physical devices.
M.S. in Computer Science, May 2017
- Title
- SPAM DETECTION IN SOCIAL NETWORKS: A CASE STUDY OF WEIBO
- Creator
- Guo, Chang
- Date
- 2012-07-11, 2012-07
- Description
-
Online Social Network Services (OSNSs) lead internet fashion nowadays [43]. Hundreds of millions of people all over the world use Facebook, Twitter, MySpace, and similar services [18]; in China, people use Sina Weibo, Tencent Weibo, and Renren instead. With these tools, people communicate with distant others in real time. In the wrong hands, however, these virtual communication services are vulnerable to being leveraged to spread harmful or unwelcome spam messages to large numbers of people instantly. Beyond illegal advertisements, worse spam messages can mislead users to phishing websites or malware download links; a compromised account may then be used by spammers to continue spreading the attack to the victim's acquaintances. The fight with spammers has lasted decades, and scholars have developed many strategies to automatically filter most spam messages [10][12][18]. In e-mail, the earliest platform leveraged by spammers and hackers, it is reported that 98% of common spam e-mails can be identified [5], but in OSNSs the fight is just beginning. In this work, I present a further study of the behavior of spammers and spammer accounts on Sina Weibo, the most popular OSNS in China. From the data I collected, I learn the patterns that differentiate spammers from legitimate users and use them to identify spammers. I study a dataset of 220K user profiles and 2.1 million of their most recent tweets. My method recognizes 84.4% of spammer accounts, with an overall classification accuracy of 89.9%. Because this method relies not on message content but on the structure and pattern of the messages, I believe it should also work well for other OSNSs such as Twitter.
M.S. in Computer Science, July 2012
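The structural (content-free) detection idea can be illustrated with a toy scoring rule. The features and thresholds below are hypothetical, not the thesis's actual Weibo model: the point is only that account structure, such as follower/followee ratio and post repetitiveness, can separate spammers from legitimate users.

```python
# Toy structural spam score (hypothetical features and thresholds).
# Spammers tend to follow many accounts while few follow back, and to
# post near-duplicate messages; neither signal looks at message text.

def spam_score(followers, followees, posts, distinct_posts):
    """Heuristic score in [0, 2]; higher means more spammer-like."""
    ratio = followers / max(followees, 1)          # low ratio: follows many, few back
    duplication = 1.0 - distinct_posts / max(posts, 1)
    return (1.0 if ratio < 0.1 else 0.0) + duplication

def is_spammer(account, threshold=1.5):
    """account = (followers, followees, posts, distinct_posts)."""
    return spam_score(*account) >= threshold

legit = (500, 300, 120, 118)     # many followers back, varied posts
bot = (3, 2000, 400, 20)         # follows thousands, repeats itself
```

A real classifier would learn weights and thresholds from labeled data, as the thesis does with supervised learning, rather than hard-coding them.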
- Title
- DUAL-BASED APPROXIMATION ALGORITHMS FOR MULTIPLE NETWORK DESIGN PROBLEMS
- Creator
- Grimmer, Benjamin
- Date
- 2016, 2016-05
- Description
-
We study a variety of NP-Complete network connectivity problems. Our primary results come from a novel dual-based approach to approximating network design problems with cut-based linear programming relaxations. This approach gives a 3/2-approximation to Minimum 2-Edge-Connected Spanning Subgraph that is equivalent to a previously proposed algorithm. One well-studied branch of network design models ad hoc networks in which each node can operate at either high or low power. If we allow unidirectional links, we can formalize this as the Dual Power Assignment (DPA) problem. Our dual-based approach gives a 3/2-approximation to DPA, improving the previous best known approximation of 11/7 ≈ 1.57. Another standard network design problem is Minimum Strongly Connected Spanning Subgraph (MSCS). We propose a new problem generalizing MSCS and DPA called Star Strong Connectivity (SSC), and show that our dual-based approach achieves a 1.6-approximation ratio for SSC. As a result of our dual-based approximations, we prove new upper bounds on the integrality gaps of these problems. For completeness, we present a family of instances of MSCS (and thus SSC) with integrality gap approaching 4/3.
M.S. in Computer Science, May 2016
- Title
- SCALABLE INDEXING AND SEARCHING ON DISTRIBUTED FILE SYSTEMS
- Creator
- Ijagbone, Itua
- Date
- 2016, 2016-05
- Description
-
Scientific applications and other high-performance applications generate large amounts of data. Unstructured data is said to comprise more than 90% of the world's information [IDC2011] and to be growing 60% annually [Grantz2008]. The large amounts of data generated by computation end up dispersed across the file system, and problems arise when we need to locate these files for later use. For a small number of files this may not be an issue, but as the number and size of files grow, it becomes difficult to locate them using ordinary methods such as GNU Grep [8], which is commonly used in High Performance Computing and Many-Task Computing environments. This thesis tackles the problem of finding files in a distributed system environment. Our work leverages the FusionFS [1] distributed file system and the Apache Lucene [10] centralized indexing engine as fundamental building blocks. We designed and implemented a distributed search interface within the FusionFS file system that makes both building and searching the index across a distributed system simple. We have evaluated our system on up to 64 nodes, compared it with Grep, Hadoop, and Cloudera, and shown that FusionFS's indexing capabilities have lower overheads and faster response times.
M.S. in Computer Science, May 2016
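Why indexing beats Grep-style scanning can be seen in a minimal inverted-index sketch (illustrative only; the thesis itself builds on Apache Lucene inside FusionFS): mapping each term to the set of files containing it turns "which files mention X?" from a full scan of every file into a dictionary lookup.

```python
# Minimal inverted index (toy sketch of what Lucene does at scale).
# build_index scans each file once; search is then a hash lookup,
# independent of total corpus size.

from collections import defaultdict

def build_index(files):
    """files: {path: text}. Returns term -> set of paths containing it."""
    index = defaultdict(set)
    for path, text in files.items():
        for term in text.lower().split():
            index[term].add(path)
    return index

def search(index, term):
    """Return the sorted list of paths whose text contains term."""
    return sorted(index.get(term.lower(), set()))

# Hypothetical file contents for illustration.
files = {
    "/data/run1.log": "checkpoint complete at step 100",
    "/data/run2.log": "checkpoint failed at step 40",
    "/data/notes.txt": "tuning notes for the scheduler",
}
idx = build_index(files)
# search(idx, "checkpoint") -> ["/data/run1.log", "/data/run2.log"]
```

Distributing this, as the thesis does, mainly means partitioning the index across nodes and merging per-node result sets at query time.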
- Title
- UNDERSTANDING VACCINATION ATTITUDES AND DETECTING SENTIMENT STIMULUS IN ONLINE SOCIAL MEDIA
- Creator
- Kadam, Mayuri
- Date
- 2017, 2017-05
- Description
-
Vaccination, one of the most important decisions for public health, has become a debatable topic with the rise in anti-vaccination sentiment in recent years. Although vaccines have eradicated many endemic diseases, rising anti-vaccination sentiment jeopardizes human health by altering vaccination decisions. Rapidly changing information sources, amplified by the increased reach of online social media, provide users with a huge amount of information and misinformation; users exposed to these media perceive the provided information and form attitudes toward it. As an open platform for discussion and opinion, online social media is a rich source for understanding people's behavior. We use supervised learning to understand the flow of vaccine sentiment and analyze user attitudes on online social media. In this thesis, we determine the events and incidents responsible for amplifying pro-vaccination and anti-vaccination sentiment, investigate user behaviors and the important topics of interest for these users, and develop a model that predicts a new user's attitude from that user's recent Twitter activity.
M.S. in Computer Science, May 2017
- Title
- MATRIX: MANY-TASK COMPUTING EXECUTION FABRIC FOR EXTREME SCALES
- Creator
- Rajendran, Anupam
- Date
- 2013-05-01, 2013-05
- Description
-
Scheduling large numbers of jobs/tasks over large-scale distributed systems plays a significant role in achieving high system utilization and throughput. Today's state-of-the-art job management and scheduling systems are predominantly master/slave architectures, which have inherent limitations such as poor scalability at extreme scales (e.g., petascale and beyond) and single points of failure. In designing a next-generation job management system that addresses both limitations, we argue that job scheduling and management must be distributed; however, distributed job management introduces new challenges, such as non-trivial load balancing. This thesis proposes an adaptive work stealing technique to achieve distributed load balancing at extreme scales, from today's petascale systems toward tomorrow's exascale systems. It also presents the design, analysis, and implementation of a distributed execution fabric called MATRIX (MAny-Task computing execution fabRIc at eXascales). MATRIX uses the adaptive work stealing algorithm for distributed load balancing and distributed hash tables for managing task metadata, and supports both high-performance computing (HPC) and many-task computing (MTC) workloads. We have validated it with synthetic workloads on up to 4K cores of an IBM BlueGene/P supercomputer; the results show that high efficiencies (e.g., 90%+) are possible for certain workloads. We study the performance of MATRIX in depth, including the network traffic generated by the work stealing algorithm. Simulation results up to 1M-node scales show that work stealing is a scalable and efficient load balancing approach from many-core architectures to extreme-scale distributed systems.
M.S. in Computer Science, May 2013
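The core of work stealing can be sketched in a few lines. This is an illustrative single-process toy, not MATRIX itself (which is distributed, uses adaptive victim selection, and coordinates via distributed hash tables): an idle worker steals half of the queued tasks from the most loaded worker.

```python
# Work-stealing sketch (toy): each worker owns a deque of tasks; an idle
# worker steals half of a loaded worker's queue from the tail, so the
# victim keeps working from the head undisturbed.

from collections import deque

def steal_half(victim, thief):
    """Move half of the victim's queued tasks onto the thief's queue."""
    n = len(victim) // 2
    for _ in range(n):
        thief.append(victim.pop())        # steal from the tail of the deque
    return n

def balance_step(queues):
    """One balancing step: every idle worker steals from the most loaded one."""
    for q in queues:
        if not q:
            victim = max(queues, key=len)
            if victim:
                steal_half(victim, q)

# Worker 0 starts with all 8 tasks; one step spreads them to 2 each.
queues = [deque(range(8)), deque(), deque(), deque()]
balance_step(queues)
```

Choosing victims adaptively (e.g., random polling with backoff, as in MATRIX) is what keeps this scalable when workers cannot see every queue's length.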
- Title
- SPECTRUM SHARING OPPORTUNITY FOR LTE AND AIRCRAFT RADAR IN THE 4.2 - 4.4 GHZ BAND
- Creator
- Singh, Rohit
- Date
- 2017, 2017-07
- Description
-
The Federal Communications Commission (FCC) states that America is facing a spectrum crunch and that there is no easy way to meet the increasing demand, so spectrum sensing and sharing have received significant attention in the spectrum community. Spectrum is an increasingly scarce natural resource that needs to be used to the fullest. Using modern techniques, spectrum bands can be reused in ways that do not interfere with a band's current users. Many bands in the RF spectrum are underutilized and can be reused in the space-time domain, and a number of them have been recognized as candidates for spectrum sharing. In this dissertation, we consider the 4.2-4.4 GHz band, which is dedicated to the radar altimeters fixed on aircraft to measure their elevation above the earth's surface. This spectrum is currently underutilized and, with care, can be shared with other technologies. This thesis examines the current use of the band as a function of time and location and presents a methodology for assessing whether harmful interference is experienced by either the incumbent radar user or a proposed secondary wireless broadband user. However, this band is potentially "safety of life" spectrum, used by aircraft during landing and takeoff; improper sharing could cause interference at the radar, resulting in false altitude detection. Because of its advanced technology, LTE can be a good sharing candidate for this sensitive band. We propose sharing the band with small cells (perhaps inside buildings) in urban and/or suburban areas, where demand for LTE is high and attenuation from the environment is high enough to limit interference at the radar altimeters. In this thesis, we propose detecting aircraft (i.e., the altimeter radars) using the Automatic Dependent Surveillance-Broadcast (ADS-B) data broadcast by each aircraft.
This aircraft detection mechanism lets us take intelligent sharing approaches with LTE in the space-time domain. Since the performance of the radar altimeter is safety-of-life critical, a deep understanding of the co-existence of these systems is necessary to evaluate whether sharing is feasible. Given the availability of historical ADS-B data, we perform what we believe is an appropriate analysis of Chicagoland and propose implementing a mix of exclusion and coordination zones in this area in the space-time domain. The novelty of this work is developing spectrum sharing opportunities with radars that are highly transient and whose locations are unpredictable due to emergencies, traffic, or weather. This thesis presents a method for evaluating the potential for spectrum sharing between ground-based LTE systems and commercial radar altimeters.
M.S. in Computer Science, July 2017
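The exclusion-zone idea can be sketched as a distance check against ADS-B position reports. The radius and coordinates below are hypothetical, not the thesis's parameters (which come from an interference analysis): an LTE small cell simply refrains from transmitting while any reported aircraft is inside its protection radius.

```python
# Toy exclusion-zone check driven by ADS-B reports (illustrative radii).
# Each report is (latitude, longitude, altitude_ft); the cell mutes
# itself whenever an aircraft is within the exclusion radius.

import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def may_transmit(cell, aircraft_reports, exclusion_km=10.0):
    """Allow LTE transmission only if no aircraft is inside the zone."""
    return all(haversine_km(cell[0], cell[1], lat, lon) > exclusion_km
               for lat, lon, _alt in aircraft_reports)

cell = (41.8781, -87.6298)                  # hypothetical downtown Chicago cell
far_plane = (42.3601, -71.0589, 10000)      # cruising far away
near_plane = (41.9000, -87.6200, 600)       # on approach nearby
```

A coordination zone, as the thesis proposes, would add an intermediate ring where transmission is allowed subject to power or scheduling constraints rather than simply muted.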
- Title
- TOWARDS THE OPTIMAL CONFIGURATION OF USING SSDS UNDER HYBRID PARALLEL I/O AND STORAGE SYSTEMS
- Creator
- Feng, Bo
- Date
- 2014, 2014-05
- Description
-
The performance gap between computing devices and storage devices has continuously widened over the past few decades, causing many I/O problems even in the field of supercomputing. On one hand, computing facilities grow very fast and supercomputers are becoming more powerful, while traditional storage devices such as hard disk drives (HDDs) fail to keep pace with this growth. On the other hand, applications from both industry and science are becoming data-intensive, placing high demands on I/O. Newly emerging non-volatile memory (NVM), such as flash-based solid state drives (SSDs), has become popular in both consumer and enterprise markets. Data centers and supercomputing centers have glimpsed this transition and are beginning to deploy SSDs in their I/O systems, but SSDs remain costly compared to HDDs. Substantial work has been done on using SSDs to accelerate I/O and storage systems. However, to the best of our knowledge, some fundamental questions remain to be addressed, such as what type of storage configuration suits HDD-SSD heterogeneity. Therefore, in this study, we built a high-performance hybrid parallel I/O and storage simulator to simulate these configurations, and implemented an algorithm to approach an optimal configuration for SSDs under parallel I/O and storage systems. The methodology consists of tracing users' applications, analyzing users' requirements including hardware properties, and generating configuration suggestions. The experiments show the simulator's fidelity, with a minimal error rate of 2%, and practical scalability up to 256 processes. The results of this study can help system designers optimize current systems or predict larger-scale parallel system designs in the future.
M.S. in Computer Science, May 2014
- Title
- RECEIVER INITIATED MAC PROTOCOL FOR WIRELESS SENSOR NETWORK
- Creator
- Duan, Sze Ching Eric
- Date
- 2012-04-30, 2012-05
- Description
-
In wireless sensor networks, wireless devices must wake up often enough to communicate with their neighbors and maintain high performance; on the other hand, it is also important that they sleep as much as possible to keep power consumption low. A power-efficient duty-cycle MAC protocol is therefore required. This thesis proposes a MAC-layer duty-cycle protocol that uses a receiver-initiated wake-up mechanism to let devices keep their transceivers off most of the time for energy efficiency. The thesis also measures the energy consumption of the protocol and compares it with other popular mechanisms, showing that the approach successfully reduces energy consumption relative to those protocols.
M.S. in Computer Science, May 2012
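The energy argument for duty cycling follows from a back-of-the-envelope model. The power figures below are illustrative assumptions, not measurements from the thesis: mean radio power falls roughly in proportion to the fraction of time the transceiver is awake.

```python
# Duty-cycle energy model (illustrative numbers): the radio draws
# active_mw while awake (receiving/beaconing) and sleep_mw while asleep,
# so mean power is a weighted average of the two.

def mean_power_mw(duty_cycle, active_mw=60.0, sleep_mw=0.003):
    """Average radio power for a given fraction of time spent awake."""
    return duty_cycle * active_mw + (1 - duty_cycle) * sleep_mw

always_on = mean_power_mw(1.0)       # 60 mW: transceiver never sleeps
one_percent = mean_power_mw(0.01)    # ~0.6 mW: ~100x less at a 1% duty cycle
```

A receiver-initiated protocol keeps the duty cycle low by having receivers broadcast short wake-up beacons, so senders need not idle-listen while waiting for their destination to wake.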
- Title
- SEMANTIC ONTOLOGIES FOR THE PUBLICATION OF SPECTRUM MEASUREMENT PROVENANCE
- Creator
- Faurie, Eric A.
- Date
- 2016, 2016-05
- Description
-
Measurement-based spectrum research is not new, but there is renewed interest in understanding how the spectrum is being utilized. With the modern prevalence of connected devices and our increasing reliance on wireless technologies, demand for additional spectrum is increasing. Meeting this demand largely depends on how the spectrum is used today, so a need for advanced measurement-based research has emerged. Spectrum measurement and analysis is complicated: the data is multi-dimensional and dynamic in time, space, and frequency; signal behavior is governed by complex mathematics; and spectrum use is regulated by government agencies across the world. Data collection relies on a complex system of expensive hardware in which the physical attributes of antennas, analyzers, and deployment locations all affect the data that is collected. All of these variables and concerns must be considered when deploying a spectrum measurement system. This paper presents the Semantic Spectrum Ontology (SSO), a model that aids researchers in designing and deploying spectrum measurement systems and publishing their data as community resources. The SSO exists within the paradigm of the Semantic Web and links into the wider semantic graph by extending the W3C's Semantic Sensor Network Ontology (SSN). The SSO also presents two new semantic constructs: the Scientific Provenance Model, which allows researchers to publish in-depth metadata about measurements and the conditions under which they were collected, and the Scientific Property Model, which creates a framework for encoding knowledge from various sources, including domain experts and machine-learning statistics. These two models were constructed specifically for the SSO but were generalized to allow their application in any ontology representing any scientific field.
M.S. in Computer Science, May 2016
- Title
- A NEW SATISFIABILITY SOLVER OF THE FEATURE LANGUAGE EXTENSION
- Creator
- Ai, Jieling
- Date
- 2012-04-23, 2012-05
- Description
-
We introduce a satisfiability solver for first-order formulas written in a modern object-oriented programming language such as Java, which is the language we use to implement the solver. The variables in a first-order formula can be of any data type definable in the host programming language. The first application of the solver is to detect interaction conditions among programs written in the Feature Language Extensions (FLX); accordingly, it also determines the satisfying conditions of a formula if the formula is satisfiable. FLX is a set of programming language constructs designed to allow the programmer to develop interacting features as reusable program modules [25]. Interaction detection is equivalent to automating the task of finding where to make code changes if the interacting features were implemented in conventional programming languages. The solver requires that predicates in a formula contain no functional elements; this restriction should not reduce the kinds of programs that can be written in the host programming language. FLX provides language support for the solver: its language constructs allow the programmer to provide semantic guidance to the solver on the data types they define, and allow the compiler to enforce the standards required of the first-order formula. While the first application of the solver is to analyze programs written in FLX, it should be useful to other applications that need a solver able to process variables used in software directly.
M.S. in Computer Science, May 2012
- Title
- A COMPARATIVE STUDY OF FEATURE INTEGRATION WITH FLX AND ASPECTJ
- Creator
- Ramakrishna Reddy, Niranjana Sompura
- Date
- 2014, 2014-07
- Description
-
Feature Language Extensions (FLX) and AspectJ are two sets of programming language constructs designed to enable the programmer to modularize interacting features, or equivalently crosscutting concerns, that cannot be modularized with a mainstream programming language. The two approaches are quite different. The purpose of this thesis is to compare how effective they are at feature integration, for example whether already developed features need to be modified. The study was conducted by integrating a set of features of the familiar computer blackjack game, which is interesting because it has features that execute some programs of the basic game iteratively and recursively. We found that with AspectJ we needed to modify existing feature code or repeat feature code under certain integration scenarios. We discuss the underlying reasons why these cases occur and, in some of them, suggest methods to overcome them.
M.S. in Computer Science, July 2014
- Title
- APPLICATION SOFTWARE DESIGN WITH THE FEATURE LANGUAGE EXTENSION
- Creator
- Maruyama, Shuichi
- Date
- 2013-04-23, 2013-05
- Description
-
When implemented with existing mainstream programming languages, the code of interacting features inevitably becomes entangled within the same reusable program unit of the language, such as a method. Interacting features are very common in software applications, and program entanglement destroys separation of concerns, making software difficult to develop, maintain, and reuse. The Feature Language Extensions (FLX) is a set of programming language constructs that allows the programmer to develop interacting features as independently reusable program modules. This thesis addresses two questions: how to design software with FLX, and whether programs that can be written in a procedural language such as Java can also be written in FLX. We illustrate our results with examples from a computer blackjack game implemented in FLX. For the first question, we introduce a set of seven design guidelines. Some of the guidelines encourage good programming practice, providing better separation of concerns and making FLX complementary to object-oriented design; others ensure that features written to follow them are reusable and do not need to change when integrated with other features. A procedural programming language such as Java has constructs that let a programmer specify program units to be executed sequentially, conditionally, iteratively, and recursively. Previous papers gave examples of how to implement the first two execution flows with FLX; in this thesis, we show how to implement the other two.
M.S. in Computer Science, May 2013