|Jianjun Zhao||Last modified date：2021.11.03|
Professor / Advanced Software Engineering / Department of Advanced Information Technology / Faculty of Information Science and Electrical Engineering
|1.||Haibo Yu, Qiang Sun, Kejun Xiao, Yuting Chen, Tsunenori Mine, Jianjun Zhao, Parallelizing Flow-Sensitive Demand-Driven Points-to Analysis, 2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C), 10.1109/QRS-C5114.2020.00026, 91-97, 2020.12.|
|2.||Lingjun Zhou, Bing Yu, David Berend, Xiaofei Xie, Xiaohong Li, Jianjun Zhao, and Zhiyong Feng, An Empirical Study on Robustness of DNNs with Out-of-Distribution Awareness, In Proc. 27th Asia-Pacific Software Engineering Conference (APSEC 2020) (Best Paper Award) [CORE Ranking B], 266-275, 2020.12.|
|3.||Hua Qi, Qing Guo, Felix Juefei-Xu, Xiaofei Xie, Lei Ma, Wei Feng, Yang Liu, Jianjun Zhao, DeepRhythm: Exposing DeepFakes with Attentional Visual Heartbeat Rhythms, In Proc. The 28th ACM International Conference on Multimedia (ACM MM 2020) [CORE ranking A*], 4318-4327, 2020.10.|
|4.||Xiaoning Du, Yi Li, Xiaofei Xie, Lei Ma, Yang Liu, Jianjun Zhao, Marble: Model-Based Robustness Analysis of Stateful Deep Learning Systems, The 35th IEEE/ACM International Conference on Automated Software Engineering (ASE 2020) [CORE Ranking A*], 423-435, 2020.09.|
|5.||David Berend, Xiaofei Xie, Lei Ma, Lingjun Zhou, Yang Liu, Chi Xu, Jianjun Zhao, Cats Are Not Fish: Deep Learning Testing Calls for Out-Of-Distribution Awareness, The 35th IEEE/ACM International Conference on Automated Software Engineering (ASE 2020) [CORE Ranking A*], 1041-1052, 2020.09.|
|6.||Xuhong Ren, Bing Yu, Hua Qi, Felix Juefei-Xu, Zhuo Li, Wanli Xue, Lei Ma, Jianjun Zhao, Few-Shot Guided Mix for DNN Repairing, In Proc. 36th IEEE International Conference on Software Maintenance and Evolution (ICSME 2020), NIER Track, 717-721, 2020.09.|
|7.||Xiongfei Wu, Liangyu Qin, Bing Yu, Xiaofei Xie, Lei Ma, Yinxing Xue, Yang Liu, Jianjun Zhao, How are Deep Learning Models Similar? An Empirical Study on Clone Analysis of Deep Learning Software, In Proc. 28th International Conference on Program Comprehension (ICPC 2020) [CORE Ranking A], 172-183, 2020.07.|
|8.||Xiyue Zhang, Xiaofei Xie, Lei Ma, Xiaoning Du, Qiang Hu, Yang Liu, Jianjun Zhao, Meng Sun, Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty, In Proc. 42nd International Conference on Software Engineering (ICSE 2020) [CORE Ranking A*], 739-751, 2020.05.|
|9.||Gefei Zhang, Jianjun Zhao, Scenario Testing of AngularJS-Based Single Page Web Applications, Current Trends in Web Engineering - ICWE 2019 International Workshops, DSKG, KDWEB, MATWEP, Proceedings, co-located with the 19th International Conference on Web Engineering (ICWE 2019), 10.1007/978-3-030-51253-8_10, 91-103, 2020.01, AngularJS is a popular framework for single-page web applications. Due to the separation of programming logic and GUI, the data and control flow in AngularJS applications is usually hard to track. We propose a white-box method that first integrates the separate concerns into one interaction diagram, which contains the overall data and control flow of a program, and then separates user interactions from each other. With the help of the interactions, our method helps achieve a better understanding of AngularJS-based single-page web applications, and moreover provides novel test coverage criteria for them.|
|10.||Xiaoning Du, Xiaofei Xie, Yi Li, Lei Ma, Yang Liu, Jianjun Zhao, A quantitative analysis framework for recurrent neural network, Proceedings - 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE 2019), 10.1109/ASE.2019.00102, 1062-1065, 2019.11, Recurrent neural network (RNN) has achieved great success in processing sequential inputs for applications such as automatic speech recognition, natural language processing and machine translation. However, quality and reliability issues of RNNs make them vulnerable to adversarial attacks and hinder their deployment in real-world applications. In this paper, we propose a quantitative analysis framework, DeepStellar, to pave the way for effective quality and security analysis of software systems powered by RNNs. DeepStellar is generic to handle various RNN architectures, including LSTM and GRU, scalable to work on industrial-grade RNN models, and extensible to develop customized analyzers and tools. We demonstrated that, with DeepStellar, users are able to design efficient test generation tools and develop effective adversarial sample detectors. We tested the developed applications on three real RNN models, including speech recognition and image classification. DeepStellar outperforms existing approaches by three hundred times in generating defect-triggering tests and achieves 97% accuracy in detecting adversarial attacks. A video demonstration which shows the main features of DeepStellar is available at: https://sites.google.com/view/deepstellar/tool-demo.|
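The state-abstraction idea in the abstract above — mapping an RNN's continuous hidden states onto a discrete state-transition system — can be sketched in a few lines. This is an illustrative toy under assumed conventions (grid-based binning, hypothetical function names), not the DeepStellar implementation:

```python
def abstract_state(hidden, lo=-1.0, hi=1.0, bins=3):
    """Map a continuous hidden-state vector to a tuple of bin indices."""
    width = (hi - lo) / bins
    return tuple(min(bins - 1, max(0, int((h - lo) / width))) for h in hidden)

def trace_transitions(hidden_trace, **kw):
    """Turn a trace of hidden vectors into abstract-state transitions."""
    states = [abstract_state(h, **kw) for h in hidden_trace]
    return list(zip(states, states[1:]))

# A toy 2-dimensional "RNN" trace crossing three grid cells:
trace = [(-0.9, 0.1), (-0.2, 0.2), (0.8, 0.9)]
print(trace_transitions(trace))
```

Once traces are sequences of abstract states, trace similarity and coverage criteria can be defined over the visited states and transitions.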
|11.||Qianyu Guo, Sen Chen, Xiaofei Xie, Lei Ma, Qiang Hu, Hongtao Liu, Yang Liu, Jianjun Zhao, Xiaohong Li, An empirical study towards characterizing deep learning development and deployment across different frameworks and platforms, Proceedings - 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE 2019), 10.1109/ASE.2019.00080, 810-822, 2019.11, Deep Learning (DL) has recently achieved tremendous success. A variety of DL frameworks and platforms play a key role in catalyzing such progress. However, the differences in architecture designs and implementations of existing frameworks and platforms bring new challenges for DL software development and deployment. Till now, there has been no study on how various mainstream frameworks and platforms influence both DL software development and deployment in practice. To fill this gap, we take the first step towards understanding how the most widely used DL frameworks and platforms support DL software development and deployment. We conduct a systematic study on these frameworks and platforms by using two types of DNN architectures and three popular datasets. (1) For the development process, we investigate the prediction accuracy under the same runtime training configuration or the same model weights/biases. We also study the adversarial robustness of trained models by leveraging existing adversarial attack techniques. The experimental results show that the computing differences across frameworks could result in an obvious prediction accuracy decline, which should draw the attention of DL developers. (2) For the deployment process, we investigate the prediction accuracy and performance (i.e., time cost and memory consumption) when the trained models are migrated/quantized from PC to real mobile devices and web browsers. The DL platform study unveils that migration and quantization still suffer from compatibility and reliability issues. Meanwhile, we find several DL software bugs by using the results as a benchmark. We further validate the results through bug confirmation from stakeholders and industrial positive feedback to highlight the implications of our study. Through our study, we summarize practical guidelines, identify challenges and pinpoint new research directions, such as understanding the characteristics of DL frameworks and platforms, avoiding compatibility and reliability issues, detecting DL software bugs, and reducing time cost and memory consumption towards developing and deploying high-quality DL systems effectively.|
|12.||Xiaofei Xie, Hongxu Chen, Yi Li, Lei Ma, Yang Liu, Jianjun Zhao, Coverage-guided fuzzing for feedforward neural networks, Proceedings - 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE 2019), 10.1109/ASE.2019.00127, 1162-1165, 2019.11, Deep neural networks (DNNs) have been widely applied to safety-critical scenarios such as autonomous vehicles, security surveillance, and cyber-physical control systems. Yet, the incorrect behaviors of DNNs can lead to severe accidents and tremendous losses due to hidden defects. In this paper, we present DeepHunter, a general-purpose fuzzing framework for detecting defects of DNNs. DeepHunter is inspired by traditional grey-box fuzzing and aims to increase the overall test coverage by applying adaptive heuristics according to runtime feedback. Specifically, DeepHunter provides a series of seed selection strategies, metamorphic mutation strategies, and testing criteria customized to DNN testing; all these components support multiple built-in configurations which are easy to extend. We evaluated DeepHunter on two popular datasets and the results demonstrate the effectiveness of DeepHunter in achieving coverage increase and detecting real defects. A video demonstration which showcases the main features of DeepHunter can be found at https://youtu.be/s5DfLErcgrc.|
|13.||Qiang Hu, Lei Ma, Xiaofei Xie, Bing Yu, Yang Liu, Jianjun Zhao, DeepMutation++: A mutation testing framework for deep learning systems, Proceedings - 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE 2019), 10.1109/ASE.2019.00126, 1158-1161, 2019.11, Deep neural networks (DNNs) are increasingly expanding their real-world applications across domains, e.g., image processing, speech recognition and natural language processing. However, there is still limited tool support for DNN testing in terms of test data quality and model robustness. In this paper, we introduce a mutation testing-based tool for DNNs, DeepMutation++, which facilitates DNN quality evaluation, supporting both feed-forward neural networks (FNNs) and stateful recurrent neural networks (RNNs). It not only enables static analysis of the robustness of a DNN model against the input as a whole, but also identifies the vulnerable segments of a sequential input (e.g., audio input) by runtime analysis. It is worth noting that DeepMutation++ specially features support for RNN mutation testing. The tool demo video can be found on the project website https://sites.google.com/view/deepmutationpp.|
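The core loop of model-level mutation testing described above — perturbing a trained model's weights to create mutants, then scoring a test set by how many mutants it "kills" — can be illustrated with a toy linear classifier. All names are hypothetical; this is not the DeepMutation++ API:

```python
import random

def predict(weights, x):
    """Toy linear classifier: sign of the dot product."""
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s >= 0 else -1

def make_mutants(weights, n, scale=0.5, seed=0):
    """Model-level mutation operator: Gaussian perturbation of weights."""
    rng = random.Random(seed)
    return [[w + rng.gauss(0, scale) for w in weights] for _ in range(n)]

def mutation_score(weights, mutants, tests):
    """Fraction of mutants whose prediction differs from the original
    model on at least one test input (i.e., mutants 'killed')."""
    killed = sum(
        any(predict(m, x) != predict(weights, x) for x in tests)
        for m in mutants
    )
    return killed / len(mutants)

model = [1.0, -1.0]
tests = [(0.1, 0.2), (1.0, 0.0), (0.0, 1.0)]
score = mutation_score(model, make_mutants(model, 20), tests)
print(score)
```

A higher score suggests the test set is more sensitive to injected model faults, which is the sense in which mutation testing measures test-data quality.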
|14.||Xiaoning Du, Xiaofei Xie, Yi Li, Lei Ma, Yang Liu, Jianjun Zhao, DeepStellar: Model-based quantitative analysis of stateful deep learning systems, ESEC/FSE 2019 - Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 10.1145/3338906.3338954, 477-487, 2019.08, Deep Learning (DL) has achieved tremendous success in many cutting-edge applications. However, state-of-the-art DL systems still suffer from quality issues. While some recent progress has been made on the analysis of feed-forward DL systems, little study has been done on Recurrent Neural Network (RNN)-based stateful DL systems, which are widely used in audio, natural language and video processing, etc. In this paper, we initiate the very first step towards the quantitative analysis of RNN-based DL systems. We model an RNN as an abstract state transition system to characterize its internal behaviors. Based on the abstract model, we design two trace similarity metrics and five coverage criteria which enable the quantitative analysis of RNNs. We further propose two algorithms powered by the quantitative measures for adversarial sample detection and coverage-guided test generation. We evaluate DeepStellar on four RNN-based systems covering image classification and automated speech recognition. The results demonstrate that the abstract model is useful in capturing the internal behaviors of RNNs, and confirm that (1) the similarity metrics can effectively capture the differences between samples even with very small perturbations (achieving 97% accuracy for detecting adversarial samples) and (2) the coverage criteria are useful in revealing erroneous behaviors (generating three times more adversarial samples than random testing and hundreds of times more than the unrolling approach).|
|15.||Xiaofei Xie, Lei Ma, Felix Juefei-Xu, Minhui Xue, Hongxu Chen, Yang Liu, Jianjun Zhao, Bo Li, Jianxiong Yin, Simon See, DeepHunter: A coverage-guided fuzz testing framework for deep neural networks, ISSTA 2019 - Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis, 10.1145/3293882.3330579, 158-168, 2019.07, The past decade has seen the great potential of applying deep neural network (DNN) based software to safety-critical scenarios, such as autonomous driving. Similar to traditional software, DNNs could exhibit incorrect behaviors, caused by hidden defects, leading to severe accidents and losses. In this paper, we propose DeepHunter, a coverage-guided fuzz testing framework for detecting potential defects of general-purpose DNNs. To this end, we first propose a metamorphic mutation strategy to generate new semantically preserved tests, and leverage multiple extensible coverage criteria as feedback to guide the test generation. We further propose a seed selection strategy that combines both diversity-based and recency-based seed selection. We implement and incorporate 5 existing testing criteria and 4 seed selection strategies in DeepHunter. Large-scale experiments demonstrate that (1) our metamorphic mutation strategy is useful for generating new valid tests with the same semantics as the original seed, with up to a 98% validity ratio; (2) diversity-based seed selection generally weighs more than recency-based seed selection in boosting coverage and detecting defects; (3) DeepHunter outperforms the state of the art in coverage as well as in the quantity and diversity of defects identified; (4) guided by corner-region-based criteria, DeepHunter is useful for capturing defects during DNN quantization for platform migration.|
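The grey-box, coverage-guided loop that the abstract above describes — pick a seed, mutate it, keep the mutant only if it increases coverage — is generic and can be sketched independently of any DNN machinery. The coverage metric and mutator below are toys, and none of the names come from the DeepHunter implementation:

```python
import random

def fuzz(seeds, coverage_of, mutate, rounds=200, rng=None):
    """Minimal coverage-guided fuzzing loop."""
    rng = rng or random.Random(0)
    queue = list(seeds)
    covered = set()
    for s in queue:
        covered |= coverage_of(s)
    for _ in range(rounds):
        seed = rng.choice(queue)      # seed selection (uniform here)
        mutant = mutate(seed, rng)    # mutation step
        if coverage_of(mutant) - covered:   # keep coverage-increasing tests
            covered |= coverage_of(mutant)
            queue.append(mutant)
    return queue, covered

# Toy target: "coverage" is the set of 2-bit buckets an integer hits.
cov = lambda x: {x & 0b11, (x >> 2) & 0b11}
mut = lambda x, rng: x ^ (1 << rng.randrange(6))
queue, covered = fuzz([0], cov, mut)
print(sorted(covered))
```

In DNN testing the coverage function would be a neuron- or state-coverage criterion and the mutator a semantics-preserving input transformation, but the feedback loop is the same shape.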
|16.||Chao Xie, Hua Qi, Lei Ma, Jianjun Zhao, DeepVisual: A visual programming tool for deep learning systems, Proceedings - 2019 IEEE/ACM 27th International Conference on Program Comprehension (ICPC 2019), 10.1109/ICPC.2019.00028, 130-134, 2019.05, As deep learning (DL) opens the way to many technological innovations in a wide range of fields, more and more researchers and developers from diverse domains start to take advantage of DL. In many circumstances, a developer leverages a DL framework and programs the training software in the form of source code (e.g., Python, Java). However, not all developers across domains are skilled at programming. It is highly desirable to provide a way for developers to focus on how to design and optimize their DL systems instead of spending too much time on programming. To simplify the programming process and save time and effort, especially for beginners, we propose and implement DeepVisual, a visual programming tool for the design and development of DL systems. DeepVisual represents each layer of a neural network as a component. A user can drag and drop components to design and build a DL model, after which the training code is automatically generated. Moreover, DeepVisual supports extracting the neural network architecture from given source code. We implement DeepVisual as a PyCharm plugin and demonstrate its usefulness on two typical use cases.|
|17.||Sarfraz Khurshid, Bo Li, Yang Liu, Lei Ma, Jianjun Zhao, Message from the MLST 2019 chairs, Proceedings - 2019 IEEE 12th International Conference on Software Testing, Verification and Validation Workshops (ICSTW 2019), 10.1109/ICSTW.2019.00021, XXIX, 2019.04.|
|18.||Weizhao Yuan, Hoang H. Nguyen, Lingxiao Jiang, Yuting Chen, Jianjun Zhao, Haibo Yu, API recommendation for event-driven Android application development, Information and Software Technology, 10.1016/j.infsof.2018.10.010, 107, 30-47, 2019.03, Context: Software development is increasingly dependent on existing libraries. Developers need help to find suitable library APIs. Although many studies have been proposed to recommend relevant functional APIs that can be invoked for implementing a functionality, few studies have paid attention to an orthogonal need associated with event-driven programming frameworks, such as the Android framework. In addition to invoking functional APIs, Android developers need to know where to place functional code according to various events that may be triggered within the framework. Objective: This paper aims to develop an API recommendation engine for Android application development that can recommend both (1) functional APIs for implementing a functionality and (2) the event callback APIs that are to be overridden to contain the functional code. Method: We carry out an empirical study on actual Android programming questions from StackOverflow to confirm the need for recommending callbacks. Then we build Android-specific API databases to capture the correlations among various functionalities and APIs, based on customized parsing of code snippets and natural language processing of texts in Android tutorials and SDK documents; textual and code similarity metrics are then adapted for recommending relevant APIs. Results: We have evaluated our prototype recommendation engine, named LibraryGuru, with about 1500 questions on Android programming from StackOverflow, and demonstrated that our top-5 results on recommending callbacks and functional APIs can achieve up to 43.5% and 50.9% in precision, 24.6% and 32.5% in mean average precision (MAP), and 51.1% and 44.0% in recall, respectively. Conclusion: We conclude that it is important and possible to recommend both functional APIs and callbacks for Android application development, and future work is needed to take more data sources into consideration to make more relevant recommendations for developers' needs.|
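The entry above adapts textual similarity metrics to match developer questions against API descriptions. A minimal bag-of-words cosine-similarity sketch of that matching step might look as follows (toy corpus, hypothetical names; not the LibraryGuru implementation):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two texts as bags of lowercase words."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(query, api_docs, top=2):
    """Rank (name, description) pairs by similarity to the query."""
    ranked = sorted(api_docs, key=lambda d: cosine(query, d[1]), reverse=True)
    return [name for name, _ in ranked[:top]]

apis = [
    ("onCreate", "callback invoked when the activity is first created"),
    ("sendBroadcast", "broadcast an intent to interested receivers"),
    ("onPause", "callback invoked when the activity loses foreground"),
]
print(recommend("which callback runs when my activity is created", apis))
```

A real engine would add code-snippet parsing and richer weighting (e.g., TF-IDF), but the ranking skeleton is the same.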
|19.||Lei Ma, Felix Juefei-Xu, Minhui Xue, Bo Li, Li Li, Yang Liu, Jianjun Zhao, DeepCT: Tomographic Combinatorial Testing for Deep Learning Systems, SANER 2019 - Proceedings of the 2019 IEEE 26th International Conference on Software Analysis, Evolution, and Reengineering, 10.1109/SANER.2019.8668044, 614-618, 2019.03, Deep learning (DL) has achieved remarkable progress over the past decade and has been widely applied to many industry domains. However, the robustness of DL systems has recently become a great concern, as a minor perturbation of the input might cause a DL system to malfunction. These robustness issues could potentially result in severe consequences when a DL system is deployed to safety-critical applications, and hinder the real-world deployment of DL systems. Testing techniques enable robustness evaluation and vulnerability detection of a DL system at an early stage. The main challenge of testing a DL system is attributed to the high dimensionality of its inputs and its large internal latent feature space, which make testing each state almost impossible. For traditional software, combinatorial testing (CT) is an effective technique to balance testing exploration effort and defect detection capability. In this paper, we perform an exploratory study of CT on DL systems. We propose a set of combinatorial testing criteria specialized for DL systems, as well as a CT coverage-guided test generation technique. Our evaluation demonstrates that CT provides a promising avenue for testing DL systems.|
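The combinatorial-testing idea above can be made concrete with a toy formulation: treat each neuron's activation status (active/inactive) in a layer as a boolean factor, and measure 2-way (pairwise) combination coverage over a test set. This is an illustrative sketch, not the DeepCT criteria themselves:

```python
from itertools import combinations

def pairwise_coverage(activation_rows):
    """activation_rows: per-neuron booleans, one row per test input.
    Returns the fraction of (neuron pair, value pair) combinations hit."""
    n = len(activation_rows[0])
    seen = set()
    for row in activation_rows:
        for i, j in combinations(range(n), 2):
            seen.add((i, j, row[i], row[j]))
    total = len(list(combinations(range(n), 2))) * 4  # 4 value pairs per neuron pair
    return len(seen) / total

rows = [
    [True, True, False],
    [False, True, True],
    [True, False, True],
]
print(pairwise_coverage(rows))
```

As in classical CT, covering all 2-way interactions is far cheaper than covering the full 2^n activation space, which is the trade-off the paper explores for DL systems.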
|20.||Yang Liu, Lei Ma, Jianjun Zhao, Secure Deep Learning Engineering: A Road Towards Quality Assurance of Intelligent Systems, Formal Methods and Software Engineering - 21st International Conference on Formal Engineering Methods (ICFEM 2019), Proceedings, 10.1007/978-3-030-32409-4_1, 3-15, 2019.01, Over the past decades, deep learning (DL) systems have achieved tremendous success and gained great popularity in various applications, such as intelligent machines, image processing, speech processing, and medical diagnostics. Deep neural networks are the key driving force behind this recent success, but they still seem to be a magic black box lacking interpretability and understanding. This brings up many open safety and security issues, with enormous and urgent demands on rigorous methodologies and engineering practice for quality enhancement. A plethora of studies have shown that state-of-the-art DL systems suffer from defects and vulnerabilities that can lead to severe loss and tragedies, especially when applied to real-world safety-critical applications. In this paper, we perform a large-scale study and construct a paper repository of 223 works relevant to the quality assurance, security, and interpretation of deep learning. Based on this, from a software quality assurance perspective, we pinpoint challenges and future opportunities to draw the attention of the software engineering community towards addressing the pressing industrial demand for secure intelligent systems.|
|21.||Haibo Yu, Xi Jia, Tsunenori Mine, Jianjun Zhao, Type conversion sequence recommendation based on semantic web technology, Proceedings - 2018 IEEE SmartWorld, Ubiquitous Intelligence and Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People and Smart City Innovations (SmartWorld/UIC/ATC/ScalCom/CBDCom/IoP/SCI 2018), 10.1109/SmartWorld.2018.00076, 240-245, 2018.12, As software systems become more and more complicated, developers increasingly depend on code recommendation tools to assist them in fulfilling their development tasks. However, current historical-code-based recommendation methods are directly affected by the quality of the historical code, and program-environment-information-based methods cannot provide satisfactory results for static methods: it is difficult to know all possible static members using only the program context, and even if all static members were known, adding all of them as search entry points may cause a space explosion. In this paper, we propose a type conversion sequence recommendation method based on program environment information. Combined with reachability analysis using semantic Web technology, the proposed method reduces the search entry points to solve the space explosion problem caused by the recommendation of static methods. We implemented an Eclipse plug-in based on the proposed method and conducted experiments on the Tomcat source code. The experimental results showed that the proposed method can not only recommend type conversion sequences with static methods effectively, but also has higher accuracy for the recommendation of object methods compared with Eclipse Code Recommenders.|
|22.||Lei Ma, Fuyuan Zhang, Jiyuan Sun, Minhui Xue, Bo Li, Felix Juefei-Xu, Chao Xie, Li Li, Yang Liu, Jianjun Zhao, Yadong Wang, DeepMutation: Mutation Testing of Deep Learning Systems, Proceedings - 29th IEEE International Symposium on Software Reliability Engineering (ISSRE 2018), 10.1109/ISSRE.2018.00021, 100-111, 2018.11, Deep learning (DL) defines a new data-driven programming paradigm where the internal system logic is largely shaped by the training data. The standard way of evaluating DL models is to examine their performance on a test dataset. The quality of the test dataset is of great importance for gaining confidence in the trained models. Using an inadequate test dataset, DL models that have achieved high test accuracy may still lack generality and robustness. In traditional software testing, mutation testing is a well-established technique for quality evaluation of test suites, which analyzes to what extent a test suite detects the injected faults. However, due to the fundamental difference between traditional software and deep learning-based software, traditional mutation testing techniques cannot be directly applied to DL systems. In this paper, we propose a mutation testing framework specialized for DL systems to measure the quality of test data. To do this, sharing the same spirit as mutation testing in traditional software, we first define a set of source-level mutation operators to inject faults into the sources of DL (i.e., training data and training programs). Then we design a set of model-level mutation operators that directly inject faults into DL models without a training process. Eventually, the quality of test data can be evaluated by analyzing to what extent the injected faults are detected. The usefulness of the proposed mutation testing techniques is demonstrated on two public datasets, namely MNIST and CIFAR-10, with three DL models.|
|23.||Lei Ma, Felix Juefei-Xu, Fuyuan Zhang, Jiyuan Sun, Minhui Xue, Bo Li, Chunyang Chen, Ting Su, Li Li, Yang Liu, Jianjun Zhao, Yadong Wang, DeepGauge: Multi-granularity testing criteria for deep learning systems, ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, 10.1145/3238147.3238202, 120-131, 2018.09, Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic of a crafted neural network through a set of training data. We have seen wide adoption of DL in many safety-critical scenarios. However, a plethora of studies have shown that state-of-the-art DL systems suffer from various vulnerabilities which can lead to severe consequences when applied to real-world applications. Currently, the testing adequacy of a DL system is usually measured by the accuracy of test data. Considering the limited availability of high-quality test data, good accuracy performance on test data can hardly provide confidence in the testing adequacy and generality of DL systems. Unlike traditional software systems, which have clear and controllable logic and functionality, the lack of interpretability in a DL system makes system analysis and defect detection difficult, which could potentially hinder its real-world deployment. In this paper, we propose DeepGauge, a set of multi-granularity testing criteria for DL systems, which aims at rendering a multi-faceted portrayal of the testbed. The in-depth evaluation of our proposed testing criteria is demonstrated on two well-known datasets, five DL systems, and four state-of-the-art adversarial attack techniques against DL. The potential usefulness of DeepGauge sheds light on the construction of more generic and robust DL systems.|
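One way to make the multi-granularity criteria above concrete is k-multisection neuron coverage: each neuron's training-time activation range is split into k sections, and coverage counts the (neuron, section) pairs hit by test-time activations. The sketch below is a simplified illustration with hypothetical names, not the DeepGauge implementation (for instance, it ignores the corner-case regions outside the training range):

```python
def k_multisection_coverage(ranges, activations, k=5):
    """ranges: per-neuron (low, high) from training;
    activations: per-input vectors of test-time neuron activations."""
    hit = set()
    for vec in activations:
        for i, (a, (lo, hi)) in enumerate(zip(vec, ranges)):
            if lo <= a <= hi:  # out-of-range values would go to corner regions
                sec = min(k - 1, int((a - lo) / (hi - lo) * k))
                hit.add((i, sec))
    return len(hit) / (len(ranges) * k)

ranges = [(0.0, 1.0), (-1.0, 1.0)]          # two neurons' observed ranges
acts = [[0.1, -0.9], [0.95, 0.0], [0.5, 0.9]]
print(k_multisection_coverage(ranges, acts, k=5))
```

Coarser and finer criteria then arise from varying k and from tracking the out-of-range corner regions separately.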
|24.||Anil Kumar Karna, Yuting Chen, Haibo Yu, Hao Zhong, Jianjun Zhao, The role of model checking in software engineering, Frontiers of Computer Science, 10.1007/s11704-016-6192-0, 12, 4, 642-668, 2018.08, Model checking is a formal verification technique. It takes an exhaustive strategy to check hardware circuits and network protocols against desired properties. Having been developed for more than three decades, model checking is now playing an important role in software engineering for verifying rather complicated software artifacts. This paper surveys the role of model checking in software engineering. In particular, we searched for the related literature published at reputed conferences, symposiums, workshops, and journals, and took a survey of (1) various model checking techniques that can be adapted to software development and their implementations, and (2) the use of model checking at different stages of a software development life cycle. We observed that model checking is useful for software debugging, constraint solving, and malware detection, and it can help verify different types of software systems, such as object- and aspect-oriented systems, service-oriented applications, web-based applications, and GUI applications, including safety- and mission-critical systems. The survey is expected to help engineers understand the role of model checking in software engineering and decide which model checking technique(s) and/or tool(s) are applicable for developing, analyzing and verifying a practical software system. For researchers, the survey also points out how model checking has been adapted to their research topics on software engineering and its challenges.|
|25.||Qiang Hu, Lei Ma, Jianjun Zhao, DeepGraph: A PyCharm Tool for Visualizing and Understanding Deep Learning Models, Proceedings - 25th Asia-Pacific Software Engineering Conference (APSEC 2018), 10.1109/APSEC.2018.00079, 628-632, 2018.07, As more and more domain-specific big data become available, there is a strong need for the fast development and deployment of high-quality deep learning (DL) systems for domain-specific applications, including many safety-critical scenarios. In traditional software engineering, software visualization plays an important role in enhancing developers' performance, with many tools available. However, there is limited visualization support for DL systems, especially in integrated development environments (IDEs), that allows a developer to visualize the source code of a deep neural network (DNN) and its graph architecture. In this paper, we propose DeepGraph, a tool for visualizing and understanding deep neural networks. DeepGraph analyzes the training program to construct the graph representation of a DNN, and establishes and maintains the linkage (mapping) between the source code of the training program and its corresponding neural network architecture. We implemented DeepGraph as a PyCharm plugin and performed a preliminary empirical study to demonstrate its usefulness for understanding deep neural networks.|
|26.||Anil Kumar Karna, Jinbo Du, Haihao Shen, Hao Zhong, Jiong Gong, Haibo Yu, Xiangning Ma, Jianjun Zhao, Tuning parallel symbolic execution engine for better performance, Frontiers of Computer Science, 10.1007/s11704-016-5459-9, 12, 1, 86-100, 2018.02, Symbolic execution is widely used in many code analysis, testing, and verification tools. As symbolic execution exhaustively explores all feasible paths, it is quite time-consuming. To handle the problem, researchers have parallelized existing symbolic execution tools (e.g., KLEE). In particular, Cloud9 is a widely used parallelized symbolic execution tool, and researchers have used the tool to analyze real code. However, researchers criticize that tools such as Cloud9 still cannot analyze large-scale code. In this paper, we conduct a field study on Cloud9, in which we use KLEE and Cloud9 to analyze benchmarks in C. Our results confirm the criticism. Based on the results, we identify three bottlenecks that hinder the performance of Cloud9: the communication time gap, the job transfer policy, and the cache management of solved constraints. To handle these problems, we tune the communication time gap with better parameters, modify the job transfer policy, and implement an approach for cache management of solved constraints. We conduct two evaluations on our benchmarks and a real application to understand our improvements. Our results show that our tuned Cloud9 reduces the execution time significantly, both on our benchmarks and on the real application. Furthermore, our evaluation results show that our tuning techniques improve effectiveness on all the devices, and the improvement can reach up to five times, depending upon the tuning value of our approach and the behaviour of the program under test.|
|28.||Ziyi Lin, Yilei Zhou, Hao Zhong, Yuting Chen, Haibo Yu, Jianjun Zhao, SPDebugger: A fine-grained deterministic debugger for concurrency code, IEICE Transactions on Information and Systems, 10.1587/transinf.2016EDP7388, E100D, 3, 473-482, 2017.03, When debugging, programmers often prepare test cases to reproduce buggy behaviours. However, for concurrent programs, test cases alone are typically insufficient to reproduce buggy behaviours, due to the nondeterminism of multi-threaded executions. In the literature, various approaches have been proposed to reproduce buggy behaviours of concurrency bugs deterministically, but to the best of our knowledge, they are still limited. In particular, we have recognized three debugging scenarios from programming practice, but existing approaches can handle only one of them. In this paper, we propose a novel approach, called SPDebugger, that provides finer-grained thread control over test cases, programs under test, and even third-party library code, to reproduce a predesigned thread execution schedule. The evaluation shows that SPDebugger handles more debugging scenarios than the state-of-the-art tool, IMUnit, with similar human effort.|
|29.||Xiao Cheng, Zhiming Peng, Lingxiao Jiang, Hao Zhong, Haibo Yu, Jianjun Zhao, CLCMiner: Detecting Cross-Language Clones without Intermediates, IEICE Transactions on Information and Systems, 10.1587/transinf.2016EDP7334, E100D, 2, 273-284, 2017.02, The proliferation of diverse programming languages and platforms makes it a common need to have the same functionality implemented in different languages for different platforms, such as Java for Android applications and C# for Windows Phone applications. Although versions of code written in different languages appear syntactically quite different from each other, they are intended to implement the same software and typically contain many code snippets that implement similar functionalities, which we call cross-language clones. When the version of code in one language evolves according to changing functionality requirements and/or bug fixes, its cross-language clones may also need to be changed to maintain consistent implementations of the same functionality. Thus, automated ways to locate and track cross-language clones within evolving software are needed. In the literature, approaches for detecting cross-language clones exist only for languages that share a common intermediate language (such as the .NET language family), because they are built on techniques for detecting single-language clones. To extend the capability of cross-language clone detection to more diverse kinds of languages, we propose a novel automated approach, CLCMiner, which does not need an intermediate language. It mines such clones from revision histories, based on our assumption that revisions to different versions of code implemented in different languages may naturally reflect how programmers change cross-language clones in practice, and that similarities among the revisions (referred to as clones in diffs, or diff clones) may indicate actual similar code. We have implemented a prototype and applied it to ten open-source projects implemented in both Java and C#. The reported clones that occur in revision histories have high precision (89% on average) and recall (95% on average). Compared with token-based code clone detection tools that treat code as plain text, our tool can detect significantly more cross-language clones. All the evaluation results demonstrate the feasibility of revision-history-based techniques for detecting cross-language clones without intermediates and point to promising future work.|
|30.||Xiao Cheng, Zhiming Peng, Lingxiao Jiang, Hao Zhong, Haibo Yu, Jianjun Zhao, Detecting Cross-Language Clones Without Intermediates, The 31st IEEE/ACM International Conference on Automated Software Engineering (ASE 2016) (Short Paper), 696-701, 2016.09.|
|31.||Xiao Cheng, Lingxiao Jiang, Hao Zhong, Haibo Yu, Jianjun Zhao, On the feasibility of detecting cross-platform code clones via identifier similarity, Proceedings of the 5th International Workshop on Software Mining (SoftwareMining 2016), co-located with ASE 2016, 10.1145/2975961.2975967, 39-42, 2016.09, More and more mobile applications run on multiple mobile operating systems to attract users on different platforms. Although versions on different platforms are implemented in different programming languages (e.g., Java and Objective-C), there must be many code snippets that implement similar business logic on different platforms. Such code snippets are called cross-platform clones. It is challenging but essential to detect such clones for software maintenance. Because developers usually use some common identifiers when implementing the same business logic on different platforms, in this paper we investigate the identifier similarity of the same mobile application on different platforms and provide insights into the feasibility of cross-platform clone detection via identifier similarity. In our experiment, we have analyzed the source code of 18 open-source cross-platform applications implemented on Android, iOS, and Windows Phone, and find that the smaller the KL-divergence of an application, the more accurate the clones detected via identifiers will be.|
|32.||Ziyi Lin, Hao Zhong, Yuting Chen, Jianjun Zhao, LockPeeker: Detecting latent locks in Java APIs, Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering (ASE 2016), 10.1145/2970276.2970355, 368-378, 2016.08, Detecting lock-related defects has long been a hot research topic in software engineering. Many efforts have been spent on detecting such deadlocks in concurrent software systems. However, latent locks may be hidden in application programming interface (API) methods whose source code may not be accessible to developers. Many APIs have latent locks. For example, our study has shown that J2SE alone can have 2,000+ latent locks. As latent locks are less known to developers, they can cause deadlocks that are hard to perceive or diagnose. Meanwhile, the state-of-the-art tools mostly handle API methods as black boxes and cannot detect deadlocks that involve such latent locks. In this paper, we propose a novel black-box testing approach, called LockPeeker, that reveals latent locks in Java APIs. The essential idea of LockPeeker is that the latent locks of a given API method can be revealed by testing the method and summarizing the locking effects during test execution. We have evaluated LockPeeker on ten real-world Java projects. Our evaluation results show that (1) LockPeeker detects 74.9% of latent locks in API methods, and (2) it enables state-of-the-art tools to detect deadlocks that otherwise cannot be detected.|
|33.||Xiao Cheng, Zhiming Peng, Lingxiao Jiang, Hao Zhong, Haibo Yu, Jianjun Zhao, Mining revision histories to detect cross-language clones without intermediates, Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering (ASE 2016), 10.1145/2970276.2970363, 696-701, 2016.08, To attract more users on different platforms, many projects release their versions in multiple programming languages (e.g., Java and C#). They typically have many code snippets that implement similar functionalities, i.e., cross-language clones. Programmers often need to track and modify cross-language clones consistently to maintain similar functionalities across different language implementations. In the literature, researchers have proposed approaches to detect cross-language clones, mostly for languages that share a common intermediate language (such as the .NET language family), so that techniques for detecting single-language clones can be applied. As a result, those approaches cannot detect cross-language clones for the many projects that are not implemented in a .NET language. To overcome this limitation, in this paper we propose a novel approach, CLCMiner, that detects cross-language clones automatically without the need for an intermediate language. Our approach mines such clones from revision histories, which reflect how programmers maintain cross-language clones in practice. We have implemented a prototype tool for our approach and conducted an evaluation on five open-source projects that have versions in Java and C#. The results show that CLCMiner achieves high accuracy and point to promising future work.|
|34.||Lei Ma, Cheng Zhang, Bing Yu, Jianjun Zhao, Retrofitting automatic testing through library tests reusing, Proceedings of the 24th IEEE International Conference on Program Comprehension (ICPC 2016), co-located with ICSE 2016, 10.1109/ICPC.2016.7503725, 2016.07, Test cases are useful for program comprehension. Developers often understand the dynamic behavior of systems by running their test cases. As manual testing is expensive, automatic testing has been extensively studied to reduce the cost. However, without sufficient knowledge of the software under test, it is difficult for automated testing techniques to create effective test cases, especially for software that requires complex inputs. In this paper, we propose to reuse existing test cases from the libraries of the software under test to generate better test cases. We observe that, when developers start to test the target software, the test cases of its dependent libraries are often available. Therefore, we propose to perform program analysis on these artifacts to extract relevant code fragments and create test sequences. We further seed these sequences to a random test generator, GRT, to generate test cases for the target software. Preliminary experiments show that the technique significantly improves the effectiveness of GRT. Our in-depth analysis reveals that several dependency metrics are good indicators of the potential benefits of applying our technique to specific programs and their libraries.|
|35.||Xiao Cheng, Hao Zhong, Yuting Chen, Zhenjiang Hu, Jianjun Zhao, Rule-directed code clone synchronization, Proceedings of the 24th IEEE International Conference on Program Comprehension (ICPC 2016), co-located with ICSE 2016, 10.1109/ICPC.2016.7503722, 2016.07, Code clones are prevalent in software systems due to many factors in software development. Detecting code clones and managing consistency between them along code evolution can be very useful for reducing clone-related bugs and maintenance costs. Despite some early attempts at detecting code clones and managing the consistency between them, the state-of-the-art tool can only handle simple code clones whose structures are identical or quite similar. However, existing empirical studies show that clones can develop quite different structures as they evolve, which can easily go beyond the capability of the state-of-the-art tool. In this paper, we propose CCSync, a novel, rule-directed approach, which bridges the structural differences between code clones and synchronizes them even when the clones become quite different in structure. The key steps of this approach are, given two code clones, to (1) extract a synchronization rule from the relationship between the clones, and (2) once one code fragment is updated, propagate the modifications to the other following the synchronization rule. We have implemented a tool for CCSync and evaluated its effectiveness on five Java projects. Our results show that there are many code clones suitable for synchronization, and our tool achieves precision of up to 92% and recall of up to 84%. In particular, more than 76% of our generated revisions are identical to manual revisions.|
|36.||Yuting Chen, Ting Su, Chengnian Sun, Zhendong Su, Jianjun Zhao, Coverage-directed differential testing of JVM implementations, Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2016), 10.1145/2908080.2908095, 85-99, 2016.06, The Java virtual machine (JVM) is a core technology whose reliability is critical. Testing JVM implementations requires painstaking effort in designing test classfiles (*.class) along with their test oracles. An alternative is to employ binary fuzzing to differentially test JVMs by blindly mutating seeding classfiles and then executing the resulting mutants on different JVM binaries to reveal inconsistent behaviors. However, this blind approach is not cost-effective in practice because most of the mutants are invalid and redundant. This paper tackles this challenge by introducing classfuzz, a coverage-directed fuzzing approach that focuses on representative classfiles for differential testing of JVMs' startup processes. Our core insight is to (1) mutate seeding classfiles using a set of predefined mutation operators (mutators) and employ Markov Chain Monte Carlo (MCMC) sampling to guide mutator selection, and (2) execute the mutants on a reference JVM implementation and use coverage uniqueness as a discipline for accepting representative ones. The accepted classfiles are used as inputs to differentially test different JVM implementations and find defects. We have implemented classfuzz and conducted an extensive evaluation of it against existing fuzz testing algorithms. Our evaluation results show that classfuzz can enhance the ratio of discrepancy-triggering classfiles from 1.7% to 11.9%. We have also reported 62 JVM discrepancies, along with the test classfiles, to JVM developers. Many of our reported issues have already been confirmed as JVM defects, and some even match recent clarifications and changes to the Java SE 8 edition of the JVM specification.|
|37.||Xiao Cheng, Yuting Chen, Zhenjiang Hu, Tao Zan, Mengyu Liu, Hao Zhong, Jianjun Zhao, Supporting Selective Undo for Refactoring, The 23rd IEEE International Conference on Software Analysis, Evolution, and Reengineering (SANER 2016), 13-23, 2016.03.|
|39.||Fei Lv, Hongyu Zhang, Jian Guang Lou, Shaowei Wang, Dongmei Zhang, Jianjun Zhao, CodeHow: Effective code search based on API understanding and extended Boolean model, Proceedings of the 30th IEEE/ACM International Conference on Automated Software Engineering (ASE 2015), 10.1109/ASE.2015.42, 260-270, 2016.01, Over the years of software development, a vast amount of source code has been accumulated. Many code search tools have been proposed to help programmers reuse previously written code by performing free-text queries over a large-scale codebase. Our experience shows that the accuracy of these code search tools is often unsatisfactory. One major reason is that existing tools lack the ability to understand queries. In this paper, we propose CodeHow, a code search technique that can recognize the potential APIs a user query refers to. Having understood the potentially relevant APIs, CodeHow expands the query with the APIs and performs code retrieval by applying the Extended Boolean model, which considers the impact of both text similarity and potential APIs on code search. We deploy the backend of CodeHow as a Microsoft Azure service and implement the front-end as a Visual Studio extension. We evaluate CodeHow on a large-scale codebase consisting of 26K C# projects downloaded from GitHub. The experimental results show that when the top-1 results are inspected, CodeHow achieves a precision score of 0.794 (i.e., 79.4% of the first returned results are relevant code snippets). The results also show that CodeHow outperforms conventional code search tools. Furthermore, we perform a controlled experiment and a survey of Microsoft developers. The results confirm the usefulness and effectiveness of CodeHow in programming practice.|
|40.||Ziyi Lin, Darko Marinov, Hao Zhong, Yuting Chen, Jianjun Zhao, JaConTeBe: A benchmark suite of real-world Java concurrency bugs, Proceedings of the 30th IEEE/ACM International Conference on Automated Software Engineering (ASE 2015), 10.1109/ASE.2015.87, 178-189, 2016.01, Researchers have proposed various approaches to detect concurrency bugs and improve multi-threaded programs, but evaluating the effectiveness of these approaches remains a substantial challenge. We survey the existing evaluations and find that they often use code or bugs not representative of the real world. To improve representativeness, we have prepared JaConTeBe, a benchmark suite of 47 confirmed concurrency bugs from 8 popular open-source projects, supplemented with test cases for reproducing the buggy behaviors. Running three approaches on JaConTeBe confirms some limitations of those approaches. We submitted JaConTeBe to the SIR repository (a software-artifact repository for rigorous controlled experiments), and it was included as a part of SIR.|
|41.||Xi Chang, Zhuo Zhang, Peng Zhang, Jianxin Xue, Jianjun Zhao, BIFER: A biphasic trace filter approach to scalable prediction of concurrency errors, Frontiers of Computer Science, 10.1007/s11704-015-4334-4, 9, 6, 944-955, 2015.10, Predictive trace analysis (PTA), a static trace analysis technique for concurrent programs, offers powerful support for finding concurrency errors unseen in a previous program execution. Existing PTA techniques face considerable challenges in scaling to large traces that contain numerous critical events. One main reason is that an analyzed trace includes not only redundant memory-access events and threads that cannot contribute to discovering any errors beyond the candidate ones already found, but also many residual synchronization events that still affect PTA's check of whether these candidates are feasible, even after the redundant events are removed. Removing both from the trace can significantly improve the scalability of PTA without affecting the quality of the PTA results. In this paper, we propose a biphasic trace filter approach, BIFER for short, to filter these redundant and residual events and thereby improve the scalability of PTA in exposing general concurrency errors. In addition, we design a model that indicates the lock history and the happens-before history of each thread, with two kinds of filtering to make the approach efficient. We implement a prototype tool, BIFER, for Java programs on the basis of a predictive trace analysis framework. Experiments show that BIFER can improve the scalability of PTA across all of the analyzed traces.|
|42.||Christoph Bockisch, Marnix Van Riet, Haihan Yin, Mehmet Aksit, Ziyi Lin, Yuting Chen, Jianjun Zhao, Trace-based debugging for advanced-dispatching programming languages, Proceedings of the 10th Workshop on Implementation, Compilation, Optimization of Object-Oriented Languages, Programs and Systems (ICOOOLPS 2015), 10.1145/2843915.2843922, 2015.07, Advanced-dispatching programming languages allow developers to implicitly alter the behaviour of a program depending on runtime program context. While this improves modularity, it also impedes comprehensibility. The use of advanced-dispatching programming languages can give rise to complex debugging scenarios that cannot be resolved efficiently with traditional debugging approaches such as breakpoint-based debugging. Therefore, tool support for analysing the full history of the runtime behaviour of advanced-dispatching programs is important for efficient debugging. In this paper, we characterise debugging scenarios to which existing work does not apply well, because they require access to the program execution history for efficient resolution. We present our design and implementation of a trace-based debugger for advanced dispatching that supports such debugging scenarios efficiently. Our approach is the first one based on an XML representation of the execution trace, giving rise to the use of powerful standard tools such as the XQuery language for searching and navigating the trace, and scalable visualisations such as tree-maps.|
|43.||Yu Ting Chen, Wei Yang, Jianjun Zhao, Program differencing for X10, Jisuanji Xuebao/Chinese Journal of Computers, 10.3724/SP.J.1016.2015.01082, 38, 5, 1082-1092, 2015.05, Program differencing is a widely used technique for program debugging, but it is still not easily applied to parallel programs. One main reason is that a parallel program can be complex, and some mechanisms (e.g., place, activity, clock, and barrier) also set barriers for program differencing. In this paper we focus on program differencing for the X10 parallel programming language and design an algorithm for differencing X10 programs. The algorithm contains three steps: (1) match the places, classes, interfaces, and methods between programs of two versions; (2) construct the extended program diagrams for the programs and simplify them into simplified diagrams; (3) iteratively unfold and compare the simplified diagrams and identify the differences between the programs.|
|44.||Xi Chang, Zhuo Zhang, Yan Lei, Jianjun Zhao, Biped: Bidirectional prediction of order violations, IEICE Transactions on Information and Systems, 10.1587/transinf.2014EDP7347, E98D, 2, 334-345, 2015.02, Concurrency bugs significantly affect system reliability. Although many efforts have been made to address this problem, there are still many bugs that cannot be detected because of the complexity of concurrent programs. Compared with atomicity violations, order violations are often neglected. Efficient and effective approaches to detecting order violations are therefore urgently needed. This paper presents a bidirectional predictive trace analysis approach, BIPED, which can detect order violations in parallel based on a recorded program execution. BIPED collects an expected-order execution trace into a layered bidirectional prediction model, which intensively represents two types of expected-order data flows in the bottom layer and combines the lock sets and the bidirectional order constraints in the upper layer. BIPED then recognizes two types of candidate violation intervals driven by the bottom-layer model and checks these recognized intervals bidirectionally based on the upper-layer constraint model. Consequently, concrete schedules can be generated to expose order violation bugs. Our experimental results show that BIPED can effectively detect real order violation bugs, with an analysis speed 2.3x-10.9x that of state-of-the-art predictive dynamic analysis approaches and 1.24x-1.8x that of hybrid-model-based static prediction approaches on order violation bugs.|
|45.||Qiang Sun, Yuting Chen, Jianjun Zhao, A constraint-weaving approach to points-to analysis for AspectJ, Frontiers of Computer Science, 10.1007/s11704-013-3106-2, 8, 1, 52-68, 2014.02, Points-to analysis is a static code analysis technique that establishes the relationships between reference variables and allocated objects. A number of points-to analysis algorithms have been proposed for procedural and object-oriented languages like C and Java, but to our knowledge few of them can be used for AspectJ. One main reason is that AspectJ is an aspect-oriented language that implements the separation of crosscutting concerns through advices, pointcuts, and inter-type declarations, and a points-to analysis of AspectJ programs may be imprecise because any aspect woven into the base code may change the points-to relations in the program, so a conservative analysis has to be taken in order to handle the aspects. In this paper, we propose a context-sensitive points-to analysis technique called AJPoints for AspectJ. Similar to the weaving mechanism of AspectJ, AJPoints obtains the constraints and templates on the points-to relations for the base code and the aspects, respectively, but weaves and solves them in an iterative manner in order to cross the boundary between the base code and the aspects. We have implemented AJPoints on the abc AspectJ compiler and evaluated it using twelve AspectJ benchmark programs. The experimental results show that our technique can achieve high precision for points-to relations in AspectJ programs.|
|46.||Ting Su, Geguang Pu, Bin Fang, Jifeng He, Jun Yan, Siyuan Jiang, Jianjun Zhao, Automated coverage-driven test data generation using dynamic symbolic execution, Proceedings of the 8th International Conference on Software Security and Reliability (SERE 2014), 10.1109/SERE.2014.23, 98-107, 2014.01, Recently, code transformations or tailored fitness functions have been adopted to achieve coverage-driven (structural or logical criterion) testing to ensure software reliability. However, internal threats exist, such as negative impacts on the underlying search strategies or local maxima. We therefore propose a dynamic symbolic execution (DSE) based framework, combined with a path-filtering algorithm and a new heuristic path search strategy, i.e., predictive path search, to achieve faster coverage-driven testing at lower cost. Empirical experiments (three open-source projects and two industrial projects) show that our approach is effective and efficient. For the open-source projects, w.r.t. branch coverage, our approach on average reduces generated test cases by 25.5% and solved constraints by 36.3% compared with the traditional DSE-based approach without path filtering. On the same testing budget, the presented heuristic strategy improves branch coverage by 26.4% and 35.4% over the search strategies adopted in KLEE and CREST, respectively.|
|48.||Qiang Sun, Yuting Chen, Jianjun Zhao, Constraint-based locality analysis for X10 programs, Proceedings of the ACM SIGPLAN 2013 Workshop on Partial Evaluation and Program Manipulation (PEPM 2013), co-located with POPL 2013, 10.1145/2426890.2426915, 137-146, 2013.02, X10 is an HPC (high-performance computing) programming language proposed by IBM to support a PGAS (partitioned global address space) programming model offering a shared address space. The address space can be further partitioned into several logical locations where objects and activities (or threads) are dynamically created. An analysis of locations can help check the safety of object accesses by exploring which objects and activities may reside in which locations, while in practice the objects and activities are usually designated at runtime and their locations may vary under different environments. In this paper, we propose a constraint-based locality analysis method for X10 called Leopard. Leopard calculates the points-to relations for analyzing the objects and activities in a program and uses a place constraint graph to analyze their locations. We have developed a tool to support Leopard and conducted an experiment to evaluate its effectiveness and efficiency. The experimental results show that Leopard can calculate the locations of objects and activities precisely.|
|49.||Jingxuan Tu, Lin Chen, Yuming Zhou, Jianjun Zhao, Baowen Xu, Leveraging method call anomalies to improve the effectiveness of spectrum-based fault localization techniques for object-oriented programs, Proceedings of the 12th International Conference on Quality Software (QSIC 2012), 10.1109/QSIC.2012.30, 1-8, 2012.11, Spectrum-based fault localization (SFL) is a lightweight automated diagnosis technique. However, when applied to object-oriented programs, its diagnostic accuracy is limited, as suspicious statements are distributed across different classes. In this paper, we propose an approach that leverages method call anomalies to improve the effectiveness of SFL techniques for locating faulty statements in an object-oriented program. First, we compute the suspiciousness of each class based on the difference in its method call sequences between passed and failed runs. Then, we use the suspiciousness information of classes to refine SFL ranks, in order to enhance their fault localization effectiveness for object-oriented software. The empirical results show that the proposed approach is able to improve the effectiveness of SFL techniques.|
|50.||Cheng Zhang, Longwen Lu, Hucheng Zhou, Jianjun Zhao, Zheng Zhang, MoonBox: Debugging with online slicing and dryrun, Proceedings of the Asia-Pacific Workshop on Systems (APSYS 2012), 10.1145/2349896.2349908, 2012.10, Efficient tools are indispensable in the battle against software bugs. In this short paper, we introduce two techniques that target different phases of an interactive and iterative debugging session. To make slice-assisted log analysis practical for fault diagnosis, slicing itself must be done instantaneously. We split the costly slicing computation into online and offline parts, and employ incremental updates after program edits. The result is a vast reduction of slicing cost. For the benchmarks we tested, slices can be computed in the range of seconds, which is 0.02%-6.5% of the cost of the unmodified slicing algorithm. The possibility of running slicing in situ with instant response time gives rise to editing-time validation, which we call dryrun. The idea is that a pair of slices, one forward from the root cause and one backward from the bug site, defines the scope in which to validate a fix. This localization makes it possible to invoke symbolic execution and constraint solving that are otherwise too expensive to use in an interactive debugging environment.|
|51.||Cheng Zhang, Juyuan Yang, Yi Zhang, Jing Fan, Xin Zhang, Jianjun Zhao, Peizhao Ou, Automatic parameter recommendation for practical API usage, Proceedings of the 34th International Conference on Software Engineering (ICSE 2012), 10.1109/ICSE.2012.6227136, 826-836, 2012.07, Programmers extensively use application programming interfaces (APIs) to leverage existing libraries and frameworks. However, correctly and efficiently choosing and using APIs from unfamiliar libraries and frameworks is still a non-trivial task. Programmers often need to ruminate on API documentation (which is often incomplete) or inspect code examples (which are often absent) to learn API usage patterns. Recently, various techniques have been proposed to alleviate this problem by creating API summarizations, mining code examples, or showing common API call sequences. However, few techniques focus on recommending API parameters. In this paper, we propose an automated technique, called Precise, to address this problem. Differing from common code completion systems, Precise mines existing code bases, uses an abstract usage instance representation for each API usage example, and then builds a parameter usage database. Upon a request, Precise queries the database for abstract usage instances in similar contexts and generates parameter candidates by concretizing the instances adaptively. The experimental results show that our technique is more general and applicable than existing code completion systems; specifically, 64% of the parameter recommendations are useful and 53% of the recommendations are exactly the same as the actual parameters needed. We have also performed a user study showing that our technique is useful in practice.|
|52.||Longwen Lu, Cheng Zhang, Jianjun Zhao, Soot-based implementation of a demand-driven reaching definitions analysis, ACM SIGPLAN International Workshop on State of the Art in Java Program Analysis, SOAP 2012 Proceedings of the ACM SIGPLAN International Workshop on State of the Art in Java Program Analysis, SOAP 2012, 10.1145/2259051.2259055, 21-26, 2012.07, As a classical data-flow analysis, reaching definitions analysis is the cornerstone of various techniques, such as code optimization and program slicing. The built-in data-flow analysis framework in Soot has been implemented in the traditional iterative style. While able to meet general requirements for implementing data-flow analyses, the framework may be less efficient for analyses in which the complete data-flow solution is unnecessary. In this paper, we introduce our Soot-based implementation of an inter-procedural demand-driven reaching definitions analysis. Given a demand for reaching-definitions facts, the analysis only explores relevant program points and variables, saving a considerable amount of computation. Preliminary results show that the implementation can be much more efficient than its traditional counterpart in several scenarios. The Soot framework has greatly facilitated the implementation by providing abundant basic analysis results via well designed APIs..|
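The demand-driven idea can be sketched in a few lines: walk the control-flow graph backwards from the queried point, and stop each path at the first redefinition of the queried variable, so only nodes relevant to the demand are visited. The node/def encoding below is a hypothetical simplification, not the paper's Soot implementation:

```python
def reaching_defs(preds, defs, point, var):
    """Answer one demand: which definitions of `var` reach `point`?
    `preds` maps a CFG node to its predecessors; `defs[n]` names the
    variable defined at node n, if any."""
    result, seen = set(), set()
    work = list(preds.get(point, ()))
    while work:
        n = work.pop()
        if n in seen:
            continue
        seen.add(n)
        if defs.get(n) == var:
            result.add(n)                   # a reaching definition; this path stops
        else:
            work.extend(preds.get(n, ()))   # keep walking backwards
    return result

# Diamond CFG 1 -> 2 -> {3, 4} -> 5, with x defined at nodes 1 and 3.
preds = {2: {1}, 3: {2}, 4: {2}, 5: {3, 4}}
defs = {1: 'x', 3: 'x', 4: 'y'}
reaching_defs(preds, defs, point=5, var='x')  # {1, 3}
```

Unlike an iterative solver, nothing outside the backward walk from node 5 is ever touched, which is where the savings come from.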
|53.||Yu Ming Zhou, Hareton Leung, Qin Bao Song, Jian Jun Zhao, Hong Min Lu, Lin Chen, Bao Wen Xu, An in-depth investigation into the relationships between structural metrics and unit testability in object-oriented systems, Science China Information Sciences, 10.1007/s11432-012-4745-x, 55, 12, 2800-2815, 2012.01, There is a common belief that structural properties of classes are important factors in determining their unit testability. However, few empirical studies have been conducted to examine the actual impact of structural properties of classes. In this paper, we employ multiple linear regression (MLR) and partial least square regression (PLSR) to investigate the relationships between the metrics measuring structural properties and the unit testability of a class. The investigated structural metrics cover five property dimensions: size, cohesion, coupling, inheritance, and complexity. Our results from open-source software systems show that: (1) most structural metrics are statistically related to unit testability in the expected direction, among which size, complexity, and coupling metrics are the most important predictors; (2) multivariate regression models based on structural metrics cannot accurately predict the unit testability of classes, although they are better able to rank it; and (3) the transition from MLR to PLSR could significantly improve the ability to rank the unit testability of classes but cannot improve the ability to predict the unit testing effort of classes..|
|54.||Cheng Zhang, Hao Xu, Sai Zhang, Jianjun Zhao, Yuting Chen, Frequency estimation of virtual call targets for object-oriented programs, 25th European Conference on Object-Oriented Programming, ECOOP 2011 ECOOP 2011 - Object-Oriented Programming - 25th European Conference, Proceedings, 10.1007/978-3-642-22655-7_24, 510-532, 2011.08, The information of execution frequencies of virtual call targets is valuable for program analyses and optimizations of object-oriented programs. However, to obtain this information, most of the existing approaches rely on dynamic profiling. They usually require running the programs with representative workloads, which are often absent in practice. Additionally, some kinds of programs are very sensitive to run-time disturbance, thus are generally not suitable for dynamic profiling. Therefore, a technique which can statically estimate the execution frequencies of virtual call targets will be very useful. In this paper we propose an evidence-based approach to frequency estimation of virtual call targets. By applying machine learning algorithms to data collected from a group of selected programs, our approach builds an estimation model to capture the relations between static features and run-time program behaviors. Then, for a new program, the approach estimates the relative frequency for each virtual call target by applying the model to the static features of the program. Once the model has been built, the estimation step is purely static, and thus does not suffer from the shortcomings of existing dynamic techniques. We have performed a number of experiments on real-world large-scale programs to evaluate our approach. The results show that our approach can estimate frequency distributions which are much more informative than the commonly used uniform distribution..|
|55.||Qiang Sun, Jianjun Zhao, Yuting Chen, Probabilistic points-to analysis for Java, 20th International Conference on Compiler Construction, CC 2011, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2011 Compiler Construction - 20th International Conference, CC 2011, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2011, Proceedings, 10.1007/978-3-642-19861-8_5, 62-81, 2011.04, Probabilistic points-to analysis is an analysis technique for defining the probabilities on the points-to relations in programs. It provides the compiler with some optimization chances such as speculative dead store elimination, speculative redundancy elimination, and speculative code scheduling. Although several static probabilistic points-to analysis techniques have been developed for C language, they cannot be applied directly to Java because they do not handle the classes, objects, inheritances and invocations of virtual methods. In this paper, we propose a context-insensitive and flow-sensitive probabilistic points-to analysis for Java (JPPA) for statically predicting the probability of points-to relations at all program points (i.e., points before or after statements) of a Java program. JPPA first constructs an interprocedural control flow graph (ICFG) for a Java program, whose edges are labeled with the probabilities calculated by an algorithm based on a static branch prediction approach, and then calculates the probabilistic points-to relations of the program based upon the ICFG. We have also developed a tool called Lukewarm to support JPPA and conducted an experiment to compare JPPA with a traditional context-insensitive and flow-sensitive points-to analysis approach. The experimental results show that JPPA is a precise and effective probabilistic points-to analysis technique for Java..|
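The core numeric step of such an analysis can be illustrated with a small sketch: at a control-flow merge point, each incoming edge contributes its points-to facts weighted by the edge probability supplied by the branch predictor. This is a simplified, hypothetical model of the kind of computation JPPA performs, not its actual code:

```python
def merge(branches):
    """Combine per-branch probabilistic points-to maps at a CFG merge.
    `branches` is a list of (edge_probability, {(var, obj): probability})."""
    out = {}
    for edge_prob, pts in branches:
        for fact, p in pts.items():
            out[fact] = out.get(fact, 0.0) + edge_prob * p
    return out

# After `if (c) p = new A(); else p = new B();` with predicted
# branch probabilities 0.7 / 0.3 (the numbers are made up):
after_if = merge([
    (0.7, {('p', 'A'): 1.0}),
    (0.3, {('p', 'B'): 1.0}),
])
# after_if[('p', 'A')] == 0.7 and after_if[('p', 'B')] == 0.3
```

A conventional flow-sensitive analysis would only report that `p` may point to either object; the probabilities are the extra information that enables speculative optimizations.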
|56.||Cheng Zhang, Zhenyu Guo, Ming Wu, Longwen Lu, Yu Fan, Jianjun Zhao, Zheng Zhang, AutoLog: Facing log redundancy and insufficiency, 2nd Asia-Pacific Workshop on Systems, APSys'11 Proceedings of the 2nd Asia-Pacific Workshop on Systems, APSys'11, 10.1145/2103799.2103811, 2011, Logs are valuable for failure diagnosis and software debugging in practice. However, due to the ad-hoc style of inserting logging statements, the quality of logs can hardly be guaranteed. In case of a system failure, the log file may contain a large number of irrelevant logs, while crucial clues to the root cause may still be missing. In this paper, we present an automated approach to log improvement based on the combination of information from program source code and textual logs. It selects the most relevant ones from an ocean of logs to help developers focus and reason along the causality chain, and generates additional informative logs to help developers discover the root causes of failures. We have conducted a preliminary case study using an implementation prototype to demonstrate the usefulness of our approach..|
|57.||Haihao Shen, Jianhong Fang, Jianjun Zhao, EFindBugs: Effective error ranking for FindBugs, 4th IEEE International Conference on Software Testing, Verification, and Validation, ICST 2011 Proceedings - 4th IEEE International Conference on Software Testing, Verification, and Validation, ICST 2011, 10.1109/ICST.2011.51, 299-308, 2011, Static analysis tools have been widely used to detect potential defects without executing programs, helping programmers raise awareness of subtle correctness issues at an early stage. However, static defect detection tools face the problem of a high false positive rate. Therefore, programmers have to spend a considerable amount of time screening out real bugs from a large number of reported warnings, which is time-consuming and inefficient. To alleviate this problem during the report inspection process, we present EFindBugs, which employs an effective two-stage error ranking strategy that suppresses the false positives and ranks the true error reports on top, so that real bugs in the programs can be more easily found and fixed by the programmers. In the first stage, EFindBugs initializes the ranking by assigning a predefined defect likelihood to each bug pattern and sorting the error reports by defect likelihood in descending order. In the second stage, EFindBugs optimizes the initial ranking self-adaptively through feedback from users. This optimization process is executed automatically and is based on the correlations among error reports with the same bug pattern. Our experiment on three widely-used Java projects (AspectJ, Tomcat, and Axis) shows that our ranking strategy outperforms the original ranking in FindBugs in terms of precision, recall and F1-score..|
|58.||Cheng Zhang, Dacong Yan, Jianjun Zhao, Yuting Chen, Shengqian Yang, BPGen: An automated breakpoint generator for debugging, 32nd ACM/IEEE International Conference on Software Engineering, ICSE 2010 ICSE 2010 - Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering, 10.1145/1810295.1810351, 271-274, 2010.07, During debugging, breakpoints are frequently used to inspect and understand the runtime behavior of programs. Although most development environments offer convenient breakpoint facilities, using them to generate useful breakpoints usually requires considerable human effort. Before setting breakpoints or typing breakpoint conditions, developers usually have to make judgements and form hypotheses on the basis of their observations and experience. To reduce this effort, we present a tool, named BPGen, to automatically generate breakpoints for debugging. BPGen uses three well-known dynamic fault localization techniques in tandem to identify suspicious program statements and states, from which both conditional and unconditional breakpoints are generated. BPGen is implemented as an Eclipse plugin that supplements the existing Eclipse JDT debugger..|
|59.||Qingzhou Luo, Sai Zhang, Jianjun Zhao, Min Hu, A lightweight and portable approach to making concurrent failures reproducible, 13th International Conference on Fundamental Approaches to Software Engineering, FASE 2010, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2010 Fundamental Approaches to Software Engineering - 13th International Conference, FASE 2010, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2010, Proceedings, 10.1007/978-3-642-12029-9_23, 323-337, 2010.04, Concurrent programs often exhibit bugs due to unintended interferences among the concurrent threads. Such bugs are often hard to reproduce because they typically happen under very specific interleavings of the executing threads, and it is very hard to fix a bug (or software failure) in a concurrent program without being able to reproduce it. In this paper, we present an approach, called ConCrash, that automatically and deterministically reproduces concurrent failures by recording the logical thread schedule and generating unit tests. For a given bug (failure), ConCrash records the logical thread scheduling order and preserves object states in memory at runtime. Then, ConCrash reproduces the failure offline by simply using the saved information, without the need for JVM-level or OS-level support. To reduce the runtime performance overhead, ConCrash employs a static data race detection technique to report potential race conditions, and instruments only those places. We implemented the ConCrash approach in a prototype tool for Java and experimented with a number of multi-threaded Java benchmarks. As a result, we successfully reproduced a number of real concurrent bugs (e.g., deadlocks, data races and atomicity violations) within an acceptable overhead..|
|60.||Yu Lin, Xucheng Tang, Yuting Chen, Jianjun Zhao, A divergence-oriented approach to adaptive random testing of Java programs, 24th IEEE/ACM International Conference on Automated Software Engineering, ASE2009 ASE2009 - 24th IEEE/ACM International Conference on Automated Software Engineering, 10.1109/ASE.2009.13, 221-232, 2009.12, Adaptive Random Testing (ART) is a testing technique based on the observation that a test input usually has the same potential as its neighbors to detect a specific program defect. ART helps to improve the efficiency of random testing in that test inputs are selected evenly across the input space. However, the application of ART to object-oriented programs (e.g., C++ and Java) still faces a strong challenge in that the input spaces of object-oriented programs are usually high-dimensional, and therefore an even distribution of test inputs in such a space is difficult to achieve. In this paper, we propose a divergence-oriented approach to adaptive random testing of Java programs to address this challenge. The essential idea of this approach is to prepare for the tested program a pool of test inputs, each of which differs significantly from the others, and then to use the ART technique to select test inputs from the pool for the tested program. We also develop a tool called ARTGen to support this testing approach, and conduct experiments on several popular open-source Java packages to assess the effectiveness of the approach. The experimental results show that our approach can generate test cases of high quality..|
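The ART selection step described above can be sketched with a fixed-candidate-set variant: from a pool of diverse candidates, pick the one whose minimum distance to the already-executed inputs is largest. The distance function and inputs below are toy placeholders; defining a distance over Java object inputs is exactly the hard part the divergence-oriented approach addresses:

```python
def art_pick(pool, executed, dist):
    """Pick the candidate farthest (in minimum distance) from the
    already-executed tests, spreading inputs evenly across the space."""
    return max(
        pool,
        key=lambda c: min((dist(c, e) for e in executed), default=float('inf')),
    )

# Toy 1-D input space with absolute difference as the distance.
pool, executed = [1, 5, 9], [2]
art_pick(pool, executed, lambda a, b: abs(a - b))  # 9, the farthest from 2
```

Each selected input is appended to `executed`, so later picks keep steering away from everything already run.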
|61.||Martin Th Görg, Jianjun Zhao, Identifying semantic differences in AspectJ programs, 18th International Symposium on Software Testing and Analysis, ISSTA 2009 Proceedings of the 18th International Symposium on Software Testing and Analysis, ISSTA 2009, 10.1145/1572272.1572276, 25-35, 2009.07, Program differencing is a common means of software debugging. Although many differencing algorithms have been proposed for procedural and object-oriented languages like C and Java, there is no differencing algorithm for aspect-oriented languages so far. In this paper we propose an approach for difference analysis of aspect-oriented programs. The proposed algorithm contains a novel way of matching two versions of a module whose signature has been modified. For this, we also work out a set of well-defined signatures for the new elements in the AspectJ language. In accordance with these signatures, and with the existing signatures for elements of the Java language, we investigate a set of signature patterns to be used with the module matching algorithm. Furthermore, we demonstrate the successful application of a node-by-node comparison algorithm originally developed for object-oriented programs. Using a tool which implements our algorithms, we set up and evaluate a set of test cases. The results demonstrate the effectiveness of our approach for a large subset of the AspectJ language..|
|62.||Yu Lin, Sai Zhang, Jianjun Zhao, Incremental call graph reanalysis for AspectJ software, 2009 IEEE International Conference on Software Maintenance, ICSM 2009 2009 IEEE International Conference on Software Maintenance, ICSM 2009 - Proceedings of the Conference, 10.1109/ICSM.2009.5306311, 306-315, 2009, Program call graph representations can be used to support many tasks in compiler optimization, program comprehension, and software maintenance. During software evolution, the call graph needs to remain fairly precise and be updated quickly in response to software changes. In this paper, we present an approach that incrementally updates, rather than exhaustively reanalyzes, the initially constructed call graph of AspectJ software. Our approach first decomposes the source code edits between the updated and initial software versions into a set of atomic change representations, which capture the semantic differences. Then, we explore the relationship between atomic changes and the call graph to incrementally update the initially constructed graph, instead of rebuilding it from the ground up. We implement the reanalysis approach on top of the ajc AspectJ compiler and perform an empirical study on 24 versions of eight AspectJ benchmarks. The experimental results show that our approach can eliminate a large portion of unnecessary reanalysis cost as program changes occur, and significant savings are observed for the incremental reconstruction of the AspectJ call graph in comparison with an exhaustive analysis, with no loss in precision..|
|63.||Sai Zhang, Zhongxian Gu, Yu Lin, Jianjun Zhao, AutoFlow: An automatic debugging tool for AspectJ software, 24th IEEE International Conference on Software Maintenance, ICSM 2008 Proceedings of the 24th IEEE International Conference on Software Maintenance, ICSM 2008, 10.1109/ICSM.2008.4658109, 470-471, 2008.12, Aspect-oriented programming (AOP) is gaining popularity with the wider adoption of languages such as AspectJ. During AspectJ software evolution, when regression tests fail, it may be tedious for programmers to find the failure-inducing changes by manually inspecting all code edits. To eliminate the expensive effort spent on debugging, we developed AutoFlow, an automatic debugging tool for AspectJ software. AutoFlow integrates the delta debugging algorithm with change impact analysis to narrow down the search for faulty changes. It first uses change impact analysis to identify a subset of changes responsible for a failed test, then ranks these changes according to our proposed heuristic (indicating the likelihood that they may have contributed to the failure), and finally employs an improved delta debugging algorithm to determine a minimal set of faulty changes. The main feature of AutoFlow is that it can automatically eliminate a large portion of irrelevant changes in an early phase and then locate faulty changes effectively..|
|64.||Sai Zhang, Zhongxian Gu, Yu Lin, Jianjun Zhao, Celadon: A change impact analysis tool for aspect-oriented programs, 30th International Conference on Software Engineering 2008, ICSE'08 ICSE'08 Proceedings of the 30th International Conference on Software Engineering 2008, 10.1145/1370175.1370184, 913-914, 2008.12, To reduce the manual effort of assessing potentially affected program parts during software evolution, we develop a tool, called Celadon, which automates change impact analysis for AspectJ programs. Celadon is implemented in the context of the Eclipse environment and designed as a plugin. It analyzes the source code of two AspectJ software versions, and decomposes their differences into a set of atomic changes together with their dependence relationships. The analysis result is reported in terms of impacted program parts and affected tests. For each affected test, Celadon also identifies a subset of affecting changes that are responsible for the test's behavior change. In particular, as one of its applications, Celadon helps facilitate fault localization by isolating failure-inducing changes for one specific affected test from other irrelevant changes..|
|65.||Sai Zhang, Zhongxian Gu, Yu Lin, Jianjun Zhao, Change impact analysis for AspectJ programs, 24th IEEE International Conference on Software Maintenance, ICSM 2008 Proceedings of the 24th IEEE International Conference on Software Maintenance, ICSM 2008, 10.1109/ICSM.2008.4658057, 87-96, 2008.12, Change impact analysis is a useful technique for software evolution. It determines the effects of a source editing session and provides valuable feedback to programmers for making correct decisions. Recently, many techniques have been proposed to support change impact analysis of procedural or object-oriented software, but little effort has been made for aspect-oriented software. In this paper we propose a new change impact analysis technique for AspectJ programs. At the core of our approach is the atomic change representation, which captures the semantic differences between two versions of an AspectJ program. We also present an impact analysis model, based on AspectJ call graph construction, to determine the affected program fragments, affected tests, and their responsible changes. The proposed techniques have been implemented in Celadon, a change impact analysis framework for AspectJ programs. We performed an empirical evaluation on 24 versions of eight AspectJ benchmarks. The results show that our proposed technique can effectively perform change impact analysis and provide valuable information in AspectJ software evolution..|
|66.||Sai Zhang, Yu Lin, Zhongxian Gu, Jianjun Zhao, Effective identification of failure-inducing changes: A hybrid approach, 2008 SIGSOFT/SIGPLAN Workshop on Program Analysis for Software Tools and Engineering, PASTE '08 Proceedings of the 2008 SIGSOFT/SIGPLAN Workshop on Program Analysis for Software Tools and Engineering, PASTE '08, 10.1145/1512475.1512492, 77-83, 2008.12, When regression tests fail unexpectedly after a long session of editing, it may be tedious for programmers to find the failure-inducing changes by manually inspecting all code edits. To eliminate the expensive effort spent on debugging, we present a hybrid approach, which combines both static and dynamic analysis techniques, to automatically identify the faulty changes. Our approach first uses static change impact analysis to isolate a subset of changes responsible for a failed test, then utilizes dynamic test execution information to rank these changes according to our proposed heuristic (indicating the likelihood that they may have contributed to the failure), and finally employs an improved three-phase delta debugging algorithm, working from the coarse method level down to the fine statement level, to find a minimal set of faulty statements. We implemented the proposed approach for both Java and AspectJ programs in our AutoFlow prototype. In our evaluation with two third-party applications, we demonstrate that this hybrid approach can be very effective: at least for the subject programs we investigated, it takes significantly (almost 4X) fewer tests than the original delta debugging algorithm to locate the faulty code..|
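The delta debugging step that both AutoFlow and this hybrid approach build on can be sketched with Zeller's classic ddmin. This is a plain single-level version without the ranking heuristic or the method-to-statement refinement described above; `fails(subset)` is assumed to replay the failed test with only that subset of changes applied:

```python
def ddmin(changes, fails):
    """Shrink a failure-inducing list of changes: the result still makes
    `fails` return True, but removing any tested chunk would not."""
    n = 2
    while len(changes) >= 2:
        step = max(1, len(changes) // n)
        subsets = [changes[i:i + step] for i in range(0, len(changes), step)]
        reduced = False
        for subset in subsets:
            complement = [c for c in changes if c not in subset]
            if fails(subset):             # the subset alone reproduces the failure
                changes, n, reduced = subset, 2, True
                break
            if complement and fails(complement):  # the failure survives removal
                changes, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(changes):
                break
            n = min(len(changes), n * 2)  # retry at a finer granularity
    return changes

# Hypothetical session: 8 edits, the failure needs edits 3 and 6 together.
ddmin(list(range(1, 9)), lambda s: {3, 6} <= set(s))  # [3, 6]
```

The heuristic ranking in the paper's improved algorithm reorders which subsets are tried first, cutting the number of test replays; the minimization logic itself is the same.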
|67.||Sai Zhang, Zhongxian Gu, Yu Lin, Jianjun Zhao, Flota: A programmer assistant for locating faulty changes in AspectJ software evolution, 4th Linking Aspect Technology and Evolution Workshop, LATE'08 - Held at the 7th International Conference on Aspect-Oriented Software Development, AOSD 2008 Proceedings of the 4th Linking Aspect Technology and Evolution Workshop, LATE'08 - held at the 7th International Conference on Aspect-Oriented Software Development, 10.1145/1404953.1404959, 2008.12, As Aspect-Oriented Programming (AOP) gains more and more popularity, there is increasing interest in using aspects to implement crosscutting concerns in object-oriented software. During software evolution, source code editing and testing are interleaved activities to assure code quality. If regression tests fail unexpectedly after a long session of editing, it may be difficult for programmers to find the failure causes. In this paper, we present Flota, a fault localization tool for AspectJ programs. When a regression test fails unexpectedly after a session of source changes, Flota first decomposes the differences between two program versions into a set of atomic changes, and then identifies a subset of affecting changes that is responsible for the failure. Programmers are allowed to select (and apply) suspected changes to the original program, constructing compilable intermediate versions. Thus, programmers can re-execute the failed test against these intermediate program versions to locate the exact faulty changes by iteratively selecting, applying, and narrowing down the set of affecting changes. Flota is implemented on top of the ajc compiler and designed as an Eclipse plugin. Our preliminary empirical study shows that Flota can effectively assist programmers in finding a small set of faulty changes and provide valuable debugging support..|
|68.||Jianjun Zhao, Maintenance support for aspect-oriented programs: Opportunities and challenges, 24th IEEE International Conference on Software Maintenance, ICSM 2008 Proceedings of the 24th IEEE International Conference on Software Maintenance, ICSM 2008, 10.1109/ICSM.2008.4658115, 482-483, 2008.12.|
|69.||Zengkai Ma, Jianjun Zhao, Test case prioritization based on analysis of program structure, 15th Asia-Pacific Software Engineering Conference, APSEC 2008 Proceedings - 15th Asia-Pacific Software Engineering Conference, APSEC 2008, 10.1109/APSEC.2008.63, 471-478, 2008.12, Test case prioritization techniques have been empirically proven to be effective in improving the rate of fault detection in regression testing. However, most previous techniques assume that all faults have equal severity, which does not match practice. In addition, because most existing techniques rely on information gained from previous executions of test cases or from source code changes, few of them can be directly applied to non-regression testing. In this paper, aiming to improve the rate of severe fault detection for both regression and non-regression testing, we propose a novel test case prioritization approach based on the analysis of program structure. The key idea of our approach is the evaluation of testing-importance for each module (e.g., method) covered by test cases. As a proof of concept, we implement Apros, a test case prioritization tool, and perform an empirical study on two real, non-trivial Java programs. The experimental results indicate that our approach could be a promising solution for improving the rate of severe fault detection..|
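The "testing-importance per covered module" idea can be sketched as a greedy, additional-coverage-style ordering. The module weights and the coverage map below are hypothetical stand-ins for what Apros derives from program structure, not the tool's actual algorithm:

```python
def prioritize(coverage, weight):
    """Order tests so each next test covers the largest amount of
    not-yet-covered module importance. `coverage[test]` is a set of modules;
    `weight[module]` is its testing-importance."""
    order, covered = [], set()
    remaining = dict(coverage)
    while remaining:
        best = max(
            remaining,
            key=lambda t: sum(weight[m] for m in remaining[t] - covered),
        )
        order.append(best)
        covered |= remaining.pop(best)
    return order

coverage = {'t1': {'a', 'b'}, 't2': {'b', 'c'}, 't3': {'d'}}
weight = {'a': 1, 'b': 5, 'c': 2, 'd': 3}   # importance per module
prioritize(coverage, weight)  # ['t2', 't3', 't1']
```

Because the ordering depends only on static structure and coverage, not on previous test executions, it applies to non-regression testing as well, which is the point the abstract makes.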
|70.||Haihao Shen, Sai Zhang, Jianjun Zhao, Jianhong Fang, Shiyuan Yao, XFindBugs: eXtended FindBugs for AspectJ, 2008 SIGSOFT/SIGPLAN Workshop on Program Analysis for Software Tools and Engineering, PASTE '08 Proceedings of the 2008 SIGSOFT/SIGPLAN Workshop on Program Analysis for Software Tools and Engineering, PASTE '08, 10.1145/1512475.1512490, 70-76, 2008.12, Aspect-oriented software development (AOSD) is gaining popularity with the wider adoption of languages such as AspectJ. However, though state-of-the-art aspect-oriented programming environments (such as AJDT in the Eclipse IDE) provide powerful capabilities for checking syntax errors in AspectJ programs, they fail to detect potential semantic defects in aspect-oriented software systems. In this paper, we present XFindBugs, an eXtended FindBugs for AspectJ, to help programmers find potential bugs in AspectJ applications through static analysis. XFindBugs supports 17 bug patterns to cover common error-prone features in an aspect-oriented system, and integrates the corresponding bug detectors into the FindBugs framework. We evaluate XFindBugs on a number of large-scale open source AspectJ projects (306,800 LOC in total). In our evaluation, XFindBugs confirms 7 reported bugs and finds 257 previously unknown defects. Our experiment also indicates that the bug patterns supported in XFindBugs exist in real-world software systems, even in mature applications by experienced programmers..|
|72.||Qiang Sun, Jianjun Zhao, Aspect-aware points-to analysis, 8th IEEE International Working Conference on Source Code Analysis and Manipulation, SCAM 2008 Proceedings - 8th IEEE International Working Conference on Source Code Analysis and Manipulation, SCAM 2008, 10.1109/SCAM.2008.30, 143-152, 2008.11, Points-to analysis is a fundamental analysis technique whose results are useful in compiler optimization and software engineering tools. Although many points-to analysis algorithms have been proposed for procedural and object-oriented languages like C and Java, there is no points-to analysis for aspect-oriented languages so far. Based on Andersen-style points-to analysis for Java, we propose a flow- and context-insensitive points-to analysis for AspectJ. The main idea is to perform the analysis across the boundary between aspects and classes; therefore, our technique is able to handle the unique aspectual features. To investigate the effectiveness of our technique, we implement our analysis approach on top of the ajc AspectJ compiler and evaluate it on nine AspectJ benchmarks. The experimental results indicate that, compared to existing Java approaches, the proposed technique can achieve significantly higher precision and run in practical time and space..|
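The Andersen-style baseline this analysis extends can be sketched as a tiny inclusion-constraint solver handling only allocation and copy constraints; the aspect-specific edges the paper adds for crossing the aspect/class boundary are omitted from this illustration:

```python
def andersen(allocs, copies):
    """Flow- and context-insensitive points-to analysis.
    `allocs` holds (var, object) facts from `p = new O()`;
    `copies` holds (dst, src) facts from `dst = src`."""
    pts = {}
    for var, obj in allocs:
        pts.setdefault(var, set()).add(obj)
    changed = True
    while changed:                      # iterate to a fixpoint
        changed = False
        for dst, src in copies:
            add = pts.get(src, set()) - pts.get(dst, set())
            if add:
                pts.setdefault(dst, set()).update(add)
                changed = True
    return pts

# a = new O1(); b = new O2(); c = a; c = b;
pts = andersen([('a', 'O1'), ('b', 'O2')], [('c', 'a'), ('c', 'b')])
# pts == {'a': {'O1'}, 'b': {'O2'}, 'c': {'O1', 'O2'}}
```

Production implementations solve the same inclusion constraints over a graph with worklists rather than by re-scanning all copies, but the fixpoint semantics is identical.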
|73.||Haihao Shen, Sai Zhang, Jianjun Zhao, An empirical study of maintainability in aspect-oriented system evolution using coupling metrics, 2nd IFIP/IEEE International Symposium on Theoretical Aspects of Software Engineering, TASE 2008 Proceedings - 2nd IFIP/IEEE International Symposium on Theoretical Aspects of Software Engineering, TASE 2008, 10.1109/TASE.2008.17, 233-236, 2008.09, In this paper, we propose a fine-grained coupling metrics suite for aspect-oriented (AO) systems to measure software changes during system evolution. We also present a correlation model, in terms of intermediate processes, for better evaluating the relation between coupling metrics and system maintainability. To investigate the practicability of our proposed model, we have implemented a coupling metrics analysis tool called AJMetrics and performed an empirical study on eight AspectJ benchmarks. The experimental results suggest that our correlation model provides useful information for evaluating the maintainability of AO systems..|
|74.||Yi Wang, Jianjun Zhao, Specifying pointcuts in AspectJ, 31st Annual International Computer Software and Applications Conference, COMPSAC 2007 Proceedings - 31st Annual International Computer Software and Applications Conference, COMPSAC 2007, 10.1109/COMPSAC.2007.196, 5-10, 2007.12, Program verification is a promising approach to improving program quality. To formally verify aspect-oriented programs, we must find a way to formally specify programs written in aspect-oriented languages. Pipa is a BISL tailored to AspectJ for specifying AspectJ programs. However, Pipa does not provide a specification method for pointcuts in AspectJ programs. Based on the existing work on Pipa and related issues, this paper proposes an approach to specifying pointcuts using the purity concept in JML. The paper also provides several examples to illustrate our pointcut specification approach..|
|75.||Alessandro Garcia, Elisa Baniassad, Cristina Videira Lopes, Christa Schwanninger, Jianjun Zhao, 1st workshop on assessment of contemporary modularization techniques (ACoM.07), 29th International Conference on Software Engineering, ICSE 2007 Proceedings - 29th International Conference on Software Engineering, ICSE 2007; Companion Volume, 10.1109/ICSECOMPANION.2007.1, 144-145, 2007.01, A number of new modularization techniques are emerging to cope with the challenges of contemporary software engineering, such as aspect-oriented software development (AOSD), feature-oriented programming (FOP), and the like. The effective assessment of such emerging modularization technologies plays a pivotal role in: (i) a better understanding of their real benefits and drawbacks when compared to conventional development techniques, and (ii) their effective transfer to mainstream software development. The ACoM workshop is the first initiative to put together researchers and practitioners with different backgrounds in order to discuss the multi-faceted issues that emerge in the assessment and/or comparison of new modularization techniques. The workshop is strongly focused on discussions around short presentations and theme-specific groups.|
|76.||Sai Zhang, Jianjun Zhao, On identifying bug patterns in aspect-oriented programs, 31st Annual International Computer Software and Applications Conference, COMPSAC 2007 Proceedings - 31st Annual International Computer Software and Applications Conference, COMPSAC 2007, 10.1109/COMPSAC.2007.159, 431-438, 2007, Bug patterns are erroneous code idioms or bad coding practices that have been proven to fail time and time again. They mainly arise from the misunderstanding of language features, the use of erroneous design patterns, or simple mistakes sharing common behaviors. Aspect-oriented programming (AOP) is a new technique to separate the cross-cutting concerns for improving modularity in software design and implementation. However, there is as yet no effective debugging technique for aspect-oriented programs, and none of the prior research has focused on identifying bug patterns in aspect-oriented programs. In this paper, we present six bug patterns in the AspectJ programming language and show a corresponding example for each bug pattern to help illustrate its symptoms. We take this as a first step toward an underlying basis for testing and debugging AspectJ programs.|
|77.||Tao Xie, Jianjun Zhao, Perspectives on automated testing of aspect-oriented programs, 3rd Workshop on Testing Aspect-Oriented Programs, WTAOP'07, held at the 6th International Conference on Aspect-Oriented Software Development Proceedings of the Third Workshop on Testing Aspect-Oriented Programs, WTAOP'07, held at the Sixth International Conference on Aspect-Oriented Software Development, 10.1145/1229384.1229386, 210, 7-12, 2007, Aspect-oriented software development is gaining popularity with the adoption of aspect-oriented languages in writing programs. To reduce the manual effort in assuring the quality of aspect-oriented programs, we have developed a set of techniques and tools for automated testing of aspect-oriented programs. This position paper presents our perspectives on automated testing techniques from three dimensions: testing aspectual behavior or aspectual composition, unit tests or integration tests, and test-input generation or test oracles. We illustrate automated testing techniques primarily through the last dimension in the perspectives. By classifying these automated testing techniques in the perspectives, we provide a better understanding of these techniques and identify future directions for automated testing of aspect-oriented programs. This position paper also presents a couple of new techniques that we propose based on the perspectives.|
|78.||Elisa Baniassad, Kung Chen, Shigeru Chiba, Jan Hannemann, Hidehiko Masuhara, Shangping Ren, Jianjun Zhao, 2nd Asian workshop on aspect-oriented software development (AOAsia), 21st IEEE/ACM International Conference on Automated Software Engineering, ASE 2006 Proceedings - 21st IEEE/ACM International Conference on Automated Software Engineering, ASE 2006, 10.1109/ASE.2006.5, 2006.12.|
|79.||Tao Xie, Jianjun Zhao, A framework and tool supports for generating test inputs of AspectJ programs, 5th International Conference on Aspect-oriented Software Development 2006, AOSD'06 Proceedings of the 5th International Conference on Aspect-oriented Software Development 2006, AOSD'06, 10.1145/1119655.1119681, 190-201, 2006.12, Aspect-oriented software development is gaining popularity with the wider adoption of languages such as AspectJ. To reduce the manual effort of testing aspects in AspectJ programs, we have developed a framework, called Aspectra, that automates the generation of test inputs for testing aspectual behavior, i.e., the behavior implemented in pieces of advice or intertype methods defined in aspects. To test aspects, developers construct base classes into which the aspects are woven to form woven classes. Our approach leverages existing test-generation tools to generate test inputs for the woven classes; these test inputs indirectly exercise the aspects. To enable aspects to be exercised during test generation, Aspectra automatically synthesizes appropriate wrapper classes for woven classes. To assess the quality of the generated tests, Aspectra defines and measures aspectual branch coverage (branch coverage within aspects). To provide guidance for developers to improve test coverage, Aspectra also defines interaction coverage. We have developed tools for automating Aspectra's wrapper synthesis and coverage measurement, and applied them on testing 12 subjects taken from a variety of sources. Our experience has shown that Aspectra effectively provides tool support, enabling existing test-generation tools to generate test inputs that improve aspectual branch coverage.|
|80.||Jianjun Zhao, Control-flow analysis and representation for aspect-oriented programs, 6th International Conference on Quality Software, QSIC 2006 Proceedings - Sixth International Conference on Quality Software, QSIC 2006, 10.1109/QSIC.2006.20, 38-45, 2006.12, Aspect-oriented programming (AOP) has been proposed as a technique for improving the separation of concerns in software design and implementation. The field of AOP has, so far, focused primarily on problem analysis, language design, and implementation. Even though the importance of program comprehension and software maintenance is known, it has received little attention in the aspect-oriented paradigm. However, as software systems coded in AOP languages accumulate, the development of techniques and tools to support program comprehension and software maintenance tasks for aspect-oriented software will become important. In order to understand and maintain aspect-oriented programs, abstract models for representing these programs are needed. In this paper, we present techniques to construct control-flow representations for aspect-oriented programs, and discuss some applications of the representations in a program comprehension and maintenance environment.|
|81.||Tao Xie, Jianjun Zhao, Darko Marinov, David Notkin, Detecting redundant unit tests for AspectJ programs, 17th International Symposium on Software Reliability Engineering, ISSRE 2006 Proceedings - 17th International Symposium on Software Reliability Engineering, ISSRE 2006, 10.1109/ISSRE.2006.21, 179-188, 2006.12, Aspect-oriented software development is gaining popularity with the adoption of languages such as AspectJ. Testing is an important part of any software development, including aspect-oriented development. To automate generation of unit tests for AspectJ programs, we can apply the existing tools that automate generation of unit tests for Java programs. However, these tools can generate a large number of test inputs, and manually inspecting the behavior of the software on all these inputs is time consuming. We propose Raspect, a framework for detecting redundant unit tests for AspectJ programs. We introduce three levels of units in AspectJ programs: advised methods, advice, and intertype methods. We show how to detect at each level redundant tests that do not exercise new behavior. Our approach selects only non-redundant tests from the automatically generated test suites, thus allowing the developer to spend less time in inspecting this reduced set of tests. We have implemented Raspect and applied it on 12 subjects taken from a variety of sources; our experience shows that Raspect can effectively reduce the size of generated test suites for inspecting AspectJ programs.|
|82.||Jianjun Zhao, Tao Xie, Nan Li, Towards regression test selection for AspectJ programs, 2nd Workshop on Testing Aspect-oriented Programs, WTAOP '06 Proceedings of the 2nd Workshop on Testing Aspect-oriented Programs, WTAOP '06, 10.1145/1146374.1146378, 21-26, 2006.12, Regression testing aims at showing that code has not been adversely affected by modification activities during maintenance. Regression test selection techniques reuse tests from an existing test suite to test a modified program. By reusing such a test suite to retest modified programs, maintainers or testers can reduce the required testing effort. This paper presents a regression test selection technique for AspectJ programs. The technique is based on various types of control flow graphs that can be used to select from the original test suite test cases that execute changed code for the new version of the AspectJ program. The code-based technique operates on the control flow graphs of AspectJ programs. The technique can be applied to modified individual aspects or classes as well as the whole program that uses modified aspects or classes.|
|83.||Bi Xin Li, Xiao Cong Fan, Jun Pang, Jian Jun Zhao, A model for slicing JAVA programs hierarchically, Journal of Computer Science and Technology, 10.1007/BF02973448, 19, 6, 848-858, 2004.11, Program slicing can be effectively used to debug, test, analyze, understand and maintain object-oriented software. In this paper, a new slicing model is proposed to slice Java programs based on their inherent hierarchical feature. The main idea of hierarchical slicing is to slice programs in a stepwise way, from the package level, to the class level, the method level, and finally the statement level. The stepwise slicing algorithm and the related graph reachability algorithms are presented, the architecture of the Java program Analyzing TOol (JATO) based on the hierarchical slicing model is provided, and the applications and a small case study are also discussed.|
|84.||Jianjun Zhao, Complexity metrics for software architectures, IEICE Transactions on Information and Systems, E87-D, 8, 2152-2156, 2004.08, A large body of research in the measurement of software complexity at code level has been conducted, but little effort has been made to measure the architectural-level complexity of a software system. In this paper, we propose some architectural-level metrics which are appropriate for evaluating the architectural attributes of a software system. The main feature of our approach is to assess the architectural-level complexity of a software system by analyzing its formal architectural specification, and therefore the process of metric computation can be automated completely.|
|85.||Jianjun Zhao, Baowen Xu, Measuring aspect cohesion, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10.1007/978-3-540-24721-0_4, 54-68, 2004.01, Cohesion is an internal software attribute representing the degree to which the components are bound together within a software module. Cohesion is considered to be a desirable goal in software development, leading to better values for external attributes such as maintainability, reusability, and reliability. Aspect-oriented software development (AOSD) is a new technique to support separation of concerns in software development. AOSD introduces a new kind of component called an aspect, which, like a class, consists of attributes (aspect instance variables) and modules such as advice, introduction, pointcuts, and methods. The cohesion of such an aspect is therefore mainly about how tightly its attributes and modules cohere. To test this hypothesis, cohesion measures for aspects are needed. In this paper, we propose an approach to assessing aspect cohesion based on dependence analysis. To this end, we present various types of dependencies between attributes and/or modules in an aspect, and the aspect dependence graph (ADG) to explicitly represent these dependencies. Based on the ADG, we formally define some aspect cohesion measures. We also discuss the properties of these dependencies, and according to these properties, we prove that these measures satisfy the properties that a good measure should have.|
|86.||Limin Xiang, Kazuo Ushijima, Kai Cheng, Jianjun Zhao, Cunwei Lu, O(1) time algorithm on BSR for constructing a binary search tree with best frequencies, 5th International Conference, PDCAT 2004 Lecture Notes in Computer Science, 10.1007/978-3-540-30501-9_48, 3320, 218-225, 2004, Constructing a binary search tree of n nodes with best frequencies needs Ω(n log n) time on RAM, and Ω(log n) time on n-processor EREW, CREW, or CRCW PRAM. In this paper, we propose an O(1) time algorithm on n-processor BSR PRAM for the problem, which is the first constant-time solution to the problem on any model of computation.|
|87.||Cunwei Lu, Genki Cho, Jianjun Zhao, Practical 3-D image measurement system using monochrome-projection color-analysis technique, Proceedings of the Seventh IASTED International Conference on Computer Graphics and Imaging Proceedings of the Seventh IASTED International Conference on Computer Graphics and Imaging, 254-259, 2004, A new pattern projection technique for measuring 3D topography is presented, in order to shorten the measurement time and to improve measurement accuracy. The Monochrome-Projection Color-Analysis (MPCA) technique is proposed to measure an object with a complicated surface color distribution. An optimal color channel is chosen and a single-channel image for intensity calculation is compounded so that the greatest amount of information from an observation pattern image is used. Moreover, in order to measure a greater number of stripes in a single projection, Optimal Intensity-Modulation Projection (OIMP) technology is adopted. By using a combination of MPCA and OIMP, about 100 stripes are reliably detectable with a single optimal pattern projection and double image capture.|
|88.||Jianjun Zhao, Cunwei Lu, Baowen Xu, A toolkit for Java bytecode analysis, Proceedings of the Seventh IASTED International Conference on Software Engineering and Applications Proceedings of the Seventh IASTED International Conference on Software Engineering and Applications, 482-487, 2003.12, In Java, programs are compiled into a portable binary format called bytecode. Every class is represented by a single class file containing class-related data and bytecode instructions. Recently, more and more Java applications are routinely transmitted over the Internet as compressed class file archives (i.e., zip files and jar files). However, the source code of such applications is usually unavailable to the user, making them difficult to understand and maintain. As a result, the development of techniques and tools to support the analysis of Java bytecode programs is important. In this paper we describe a toolkit, called Kafer, that supports the development of software engineering tools for Java bytecode programs. Kafer is a prototype implementation of the techniques proposed in our previous work.|
|89.||Jianjun Zhao, Data-Flow-Based Unit Testing of Aspect-Oriented Programs, Proceedings: 27th Annual International Computer Software and Applications Conference, COMPSAC 2003 Proceedings - IEEE Computer Society's International Computer Software and Applications Conference, 188-197, 2003.12, Research so far in aspect-oriented software development has focused on problem analysis, software design, and implementation techniques. Even though the importance of software testing is known, it has received little attention in the aspect-oriented paradigm. In this paper, we propose a data-flow-based unit testing approach for aspect-oriented programs. Our approach tests two types of units of an aspect-oriented program, i.e., aspects that are modular units of crosscutting implementation of the program, and those classes whose behavior may be affected by one or more aspects. For each aspect or class, our approach performs three levels of testing, i.e., intra-module, inter-module, and intra-aspect or intra-class testing. For an individual module such as a piece of advice, a piece of introduction, or a method, we perform intra-module testing. For a public module along with other modules it calls in an aspect or class, we perform inter-module testing. For modules that can be accessed outside the aspect or class and can be invoked in any order by users of the aspect or class, we perform intra-aspect or intra-class testing. Our approach can handle unit testing problems that are unique to aspect-oriented programs. We use control flow graphs to compute def-use pairs of an aspect or class being tested and use such information to guide the selection of tests for the aspect or class.|
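The def-use pairs that guide test selection in the entry above come from a standard reaching-definitions computation over a control flow graph. A minimal illustrative sketch follows; the toy three-node CFG and its def/use sets are assumptions for demonstration, not taken from the paper.

```python
def def_use_pairs(nodes, edges, defs, uses):
    """Compute (def_node, use_node, var) pairs via reaching definitions.

    nodes: list of node ids; edges: list of (src, dst);
    defs/uses: node -> set of variable names defined/used at that node.
    """
    preds = {n: [s for s, d in edges if d == n] for n in nodes}
    reach_in = {n: set() for n in nodes}   # sets of (def_node, var)
    reach_out = {n: set() for n in nodes}
    changed = True
    while changed:  # iterative data-flow analysis to a fixpoint
        changed = False
        for n in nodes:
            rin = (set().union(*(reach_out[p] for p in preds[n]))
                   if preds[n] else set())
            # a definition of v at n kills earlier definitions of v
            rout = ({(d, v) for d, v in rin if v not in defs[n]}
                    | {(n, v) for v in defs[n]})
            if rin != reach_in[n] or rout != reach_out[n]:
                reach_in[n], reach_out[n] = rin, rout
                changed = True
    return {(d, n, v) for n in nodes for v in uses[n]
            for d, dv in reach_in[n] if dv == v}

# Toy CFG: 1: x = ...;  2: predicate;  3: use of x
pairs = def_use_pairs([1, 2, 3], [(1, 2), (2, 3)],
                      {1: {'x'}, 2: set(), 3: set()},
                      {1: set(), 2: set(), 3: {'x'}})
```

Each resulting pair is one data-flow test requirement: a test must cover a path from the definition to the use without an intervening redefinition.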
|90.||Jianjun Zhao, Martin Rinard, Pipa: A behavioral interface specification language for AspectJ, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2621, 150-165, 2003.12, Pipa is a behavioral interface specification language (BISL) tailored to AspectJ, an aspect-oriented programming language. Pipa is a simple and practical extension to the Java Modeling Language (JML), a BISL for Java. Pipa uses the same basic approach as JML to specify AspectJ classes and interfaces, and extends JML, with just a few new constructs, to specify AspectJ aspects. Pipa also supports aspect specification inheritance and crosscutting. This paper discusses the goals and overall approach of Pipa. It also provides several examples of Pipa specifications and discusses how to transform an AspectJ program together with its Pipa specification into a corresponding Java program and JML specification. The goal is to facilitate the use of existing JML-based tools to verify AspectJ programs.|
|91.||Jianjun Zhao, Hongji Yang, Liming Xiang, Baowen Xu, Change impact analysis to support architectural evolution, Journal of Software Maintenance and Evolution, 10.1002/smr.258, 14, 5, 317-333, 2002.09, Change impact analysis is a useful technique in software maintenance and evolution. Many techniques have been proposed to support change impact analysis at the code level of software systems, but little effort has been made for change impact analysis at the architectural level. In this paper, we present an approach to supporting change impact analysis at the architectural level of software systems based on an architectural slicing and chopping technique. The main feature of our approach is to assess the effect of changes in a software architecture by analyzing its formal architectural specification, and, therefore, the process of change impact analysis can be automated completely.|
|92.||Limin Xiang, Kazuo Ushijima, Jianjun Zhao, Time optimal n-size matching parentheses and binary tree decoding algorithms on a p-processor BSR, Parallel Processing Letters, 12, 3-4, 365-374, 2002.09, Time optimal algorithms on an n-processor BSR PRAM for many n-size problems can be found in the literature. They outpace those on EREW, CREW or CRCW PRAM for the same problems. When only p (1 < p < n) processors are available, efficient algorithms on a p-processor BSR for some n-size problems cannot be obtained from those on an n-processor BSR, and they have to be reconsidered. In this paper, we discuss and give two algorithms on a p-processor BSR for the two n-size problems of matching parentheses and decoding a binary tree from its bit-string, respectively, and show that they are time optimal.|
|93.||Jianjun Zhao, Slicing aspect-oriented software, 10th International Workshop on Program Comprehension, IWPC 2002 Proceedings - 10th International Workshop on Program Comprehension, IWPC 2002, 10.1109/WPC.2002.1021346, 251-260, 2002.01, Program slicing has many applications in software engineering activities including program comprehension, debugging, testing, maintenance, and model checking. In this paper, we propose an approach to slicing aspect-oriented software. To solve this problem, we present a dependence-based representation called the aspect-oriented system dependence graph (ASDG), which extends previous dependence graphs, to represent aspect-oriented software. The ASDG of an aspect-oriented program consists of three parts: a system dependence graph for non-aspect code, a group of dependence graphs for aspect code, and some additional dependence arcs used to connect the system dependence graph to the dependence graphs for aspect code. After that, we show how to compute a static slice of an aspect-oriented program based on the ASDG.|
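Once a dependence-based representation such as the ASDG in the entry above has been built, computing a static slice reduces to backward reachability from the slicing criterion over dependence edges. A generic sketch of that reachability step; the edge list and node names are illustrative assumptions, not the paper's graph.

```python
def backward_slice(dep_edges, criterion):
    """Backward reachability over a dependence graph.

    dep_edges: list of (src, dst) meaning dst depends on src.
    Returns the set of nodes the criterion transitively depends on,
    including the criterion itself.
    """
    rdeps = {}
    for src, dst in dep_edges:
        rdeps.setdefault(dst, set()).add(src)
    visited, work = set(), [criterion]
    while work:  # walk dependence edges backwards from the criterion
        n = work.pop()
        if n in visited:
            continue
        visited.add(n)
        work.extend(rdeps.get(n, ()))
    return visited

# s3 depends on s1 and s2; s4 depends on s3 and s5
edges = [('s1', 's3'), ('s2', 's3'), ('s3', 's4'), ('s5', 's4')]
```

Slicing on `s3` keeps `s1`, `s2`, and `s3` but excludes `s4` and `s5`; the interesting work in the paper lies in building the graph across the aspect/class boundary, not in this traversal.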
|94.||Zhenqiang Chen, Baowen Xu, Jianjun Zhao, Hongji Yang, Static dependency analysis for concurrent Ada 95 programs, 7th International Conference on Reliable Software Technologies, Ada-Europe 2002 Reliable Software Technologies - Ada-Europe 2002 - 7th Ada-Europe International Conference on Reliable Software Technologies, Proceedings, 10.1007/3-540-48046-3_17, 219-230, 2002.01, Program dependency analysis is an analysis technique to identify and determine various program dependencies in source code. It is an important approach for testing, understanding, maintaining, and transforming programs. However, many difficulties remain to be solved when carrying out dependency analysis for concurrent programs, because the execution of statements is nondeterministic. In this paper, we propose a novel approach to analyzing dependencies in concurrent Ada 95 programs. Two graphs, the concurrent program flow graph and the concurrent program dependency graph, are developed to represent concurrent Ada programs and analyze dependency relations. The paper also presents a dependency analysis algorithm, which can obtain more precise information than most previous methods we know of.|
|95.||Jianjun Zhao, Architectural-level metrics for software systems, Proceedings of the Sixth International Conference for Young Computer Scientists: In Computer Science and Technology in New Century Proceedings of the Sixth International Conference for Young Computer Scientists in Computer Science and Technology in New Century, 29-33, 2001.12, A large body of research in the measurement of software complexity at code level has been conducted, but little effort has been made to measure the architectural-level complexity of a software system. In this paper, we propose some architectural metrics which are appropriate for evaluating the architectural attributes of a software system. The main feature of our approach is to assess the architectural-level complexity of a software system by analyzing its formal architectural specification, and therefore the process of metric computation can be automated completely.|
|96.||Jianjun Zhao, Static analysis of Java bytecode, Wuhan University Journal of Natural Sciences, 10.1007/BF03160273, 6, 1-2, 383-390, 2001.03, Understanding control flows in a computer program is essential for many software engineering tasks such as testing, debugging, reverse engineering, and maintenance. In this paper, we present a control flow analysis technique to analyze the control flow in Java bytecode. To perform the analysis, we construct a control flow graph (CFG) for Java bytecode not only at the intraprocedural level but also at the interprocedural level. We also discuss some applications of a CFG in a maintenance environment for Java bytecode.|
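The intraprocedural step of CFG construction described in the entry above conventionally starts by identifying basic-block leaders: the first instruction, every branch target, and every instruction that follows a branch. A hedged sketch over an assumed tuple encoding of bytecode-like instructions; the opcode names are a small illustrative subset, not the full JVM instruction set.

```python
def find_leaders(instructions):
    """Return sorted offsets of basic-block leaders.

    instructions: list of (offset, opcode, branch_target_or_None).
    """
    branch_ops = {'goto', 'ifeq', 'ifne'}   # assumed subset of opcodes
    leaders = {instructions[0][0]}          # first instruction leads
    for i, (off, op, target) in enumerate(instructions):
        if op in branch_ops:
            leaders.add(target)             # a branch target starts a block
            if i + 1 < len(instructions):
                # the fall-through successor also starts a block
                leaders.add(instructions[i + 1][0])
    return sorted(leaders)

# Toy method: a conditional with two arms and a return
prog = [(0, 'iload', None), (1, 'ifeq', 4), (2, 'iconst', None),
        (3, 'goto', 5), (4, 'iconst', None), (5, 'ireturn', None)]
```

Basic blocks then run from each leader up to (but excluding) the next leader, and CFG edges follow the branch targets and fall-throughs.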
|97.||Jianjun Zhao, Jingde Cheng, Kazuo Ushijima, CLPKIDS: A program analysis system for concurrent logic programs, Proceedings - IEEE Computer Society's International Computer Software and Applications Conference, 10.1109/CMPSAC.2001.960664, 531-537, 2001.01, A program analysis system, CLPKIDS, that supports the development of software engineering tools for concurrent logic programs is described. The core modules of the system consist of a program slicer, a declarative debugger, and a maintenance support tool. The features of the system allow the analysis to be performed in a unified framework that simplifies the implementation of the algorithms.|
|98.||Jianjun Zhao, Jingde Cheng, Kazuo Ushijima, Computing executable slices for concurrent logic programs, 2nd Asia-Pacific Conference on Quality Software, APAQS 2001 Proceedings - 2nd Asia-Pacific Conference on Quality Software, APAQS 2001, 10.1109/APAQS.2001.989997, 13-22, 2001.01, Program slicing has many applications in software engineering activities. However, until recently, no slicing algorithm has been presented that can compute executable slices for concurrent logic programs. In this paper we present a dependence-graph-based approach to computing executable slices for concurrent logic programs. The dependence-based representation used in this paper is called the Argument Dependence Net (ADN), which can be used to explicitly represent various types of program dependences in a concurrent logic program. Based on the ADN, we can compute static executable slices for concurrent logic programs at the argument level.|
|99.||Jianjun Zhao, Dynamic slicing of object-oriented programs, Wuhan University Journal of Natural Sciences, 10.1007/BF03160274, 6, 1-2, 391-397, 2001.01, Program slicing has many applications such as program debugging, testing, maintenance, and complexity measurement. A static slice consists of all statements in a program P that may affect the value of a variable v at some point p, whereas a dynamic slice consists only of statements that influence the value of a variable occurrence for specific program inputs. In this paper, we address the problem of dynamic slicing of object-oriented programs which, to our knowledge, has not been addressed in the literature. To solve this problem, we present the dynamic object-oriented dependence graph (DODG), which is an arc-classified digraph that explicitly represents the various dynamic dependences between statement instances for a particular execution of an object-oriented program. Based on the DODG, we present a two-phase backward algorithm for computing a dynamic slice of an object-oriented program.|
|100.||Jianjun Zhao, Slicing-based approach to extracting reusable software architectures, The 4th European Conference on Software Maintenance and Reengineering - CSMR 2000 Proceedings of the European Conference on Software Maintenance and Reengineering, CSMR, 215-223, 2000.01, An alternative approach to developing reusable components from scratch is to recover them from existing systems. Although numerous techniques have been proposed to recover reusable components from existing systems, most have focused on implementation code, rather than software architecture. In this paper, we apply architectural slicing to extract reusable architectural elements (i.e., components and connectors) from the existing architectural specification of a software system. Unlike traditional program slicing, which operates on the source code of a program to provide the low-level implementation details of the program, architectural slicing directly operates on the architectural specification of a software system, and therefore can provide useful knowledge about the high-level architecture of the system.|
|101.||Jianjun Zhao, Dependence analysis of Java bytecode, Unknown Journal, 486-491, 2000, A dependence analysis technique for Java bytecode is presented. Various types of primary dependencies in Java bytecode at the intraprocedural level are defined. Some applications of the technique in software engineering tasks related to Java bytecode development, including Java bytecode slicing, understanding, and testing, are discussed. The technique can also be used as an underlying basis for developing other software engineering tools to aid debugging, reengineering, and reverse engineering of Java bytecode.|
|102.||Jianjun Zhao, Dependence analysis at the architectural level, Chinese Journal of Advanced Software Research, 6, 2, 164-168, 1999.12, This paper introduces a new dependence analysis technique, named architectural dependence analysis, to support software architecture development. In contrast to traditional dependence analysis, architectural dependence analysis is designed to operate on a formal architectural specification of a software system, rather than the source code of a conventional program. Architectural dependence analysis provides knowledge of dependences for the high-level architecture of a software system, rather than the low-level implementation details of a conventional program.|
|103.||Jianjun Zhao, Multithreaded dependence graphs for concurrent Java programs, 1999 International Symposium on Software Engineering for Parallel and Distributed Systems, PDSE 1999 Proceedings - International Symposium on Software Engineering for Parallel and Distributed Systems, PDSE 1999, 10.1109/PDSE.1999.779735, 13-23, 1999.01, Understanding program dependencies in a computer program is essential for many software engineering activities including program slicing, testing, debugging, reverse engineering, and maintenance. We present a dependence-based representation called the multithreaded dependence graph, which extends previous dependence-based representations, to represent program dependencies in a concurrent Java program. We also discuss some important applications of a multithreaded dependence graph in a maintenance environment for concurrent Java programs.|
|104.||Jianjun Zhao, Slicing concurrent Java programs, 7th International Workshop on Program Comprehension, IWPC 1999 Proceedings - 7th International Workshop on Program Comprehension, IWPC 1999, 10.1109/WPC.1999.777751, 126-133, 1999.01, Although many slicing algorithms have been proposed for object-oriented programs, no slicing algorithm has been proposed that can handle the problem of slicing concurrent Java programs correctly. We propose a slicing algorithm for concurrent Java programs. To slice concurrent Java programs, we present a dependence-based representation called the multithreaded dependence graph, which extends previous dependence graphs to represent concurrent Java programs. We also show how static slices of a concurrent Java program can be computed efficiently based on its multithreaded dependence graph.|
|105.||Jianjun Zhao, Jingde Cheng, Kazuo Ushijima, A dependence-based representation for concurrent object-oriented software maintenance, 2nd Euromicro Conference on Software Maintenance and Reengineering, CSMR 1998 Proceedings of the 2nd Euromicro Conference on Software Maintenance and Reengineering, CSMR 1998, 10.1109/CSMR.1998.665734, 60-66, 1998.01, Software maintenance is a costly process because each modification to a program must take into account many complex dependence relationships in the existing software. An understanding of program dependences is therefore an inevitable step toward efficient software change. We propose a dependence-based representation named the system dependence net (SDN), which extends previous dependence-based representations to represent various program dependences in concurrent object-oriented programs. An SDN of a concurrent object-oriented program consists of a collection of dependence graphs, each representing a main procedure, a free-standing procedure, or a method in a class of the program. It also contains additional arcs to represent direct dependences between a call and the called procedure/method and transitive interprocedural data dependences. An SDN can represent both object-oriented features and concurrency issues in a concurrent object-oriented program, and can be used as an underlying representation in a maintenance environment for concurrent object-oriented programs.|
|106.||Jianjun Zhao, Jingde Cheng, Kazuo Ushijima, A metrics suite for concurrent logic programs, Proceedings of the 2nd Euromicro Conference on Software Maintenance and Reengineering (CSMR 1998), 10.1109/CSMR.1998.665796, 172-178, 1998.01, A large body of research on the measurement of software complexity has focused on imperative programs, but little effort has been made for logic programs. In this paper, complexity metrics for concurrent logic programs are proposed that are specifically designed to quantify the information flow of such programs. The metrics are defined on the argument dependence net (ADN) of a concurrent logic program, an arc-classified digraph that explicitly represents the various program dependences between arguments in the program. The proposed metrics can be used to measure the complexity of a concurrent logic program from a variety of viewpoints.|
|107.||Jianjun Zhao, Applying slicing technique to software architectures, Proceedings of the 4th IEEE International Conference on Engineering of Complex Computer Systems (ICECCS 1998), 10.1109/ICECCS.1998.706659, 87-98, 1998.01, Software architecture is receiving increasing attention as a critical design level for software systems. As architectural design resources (in the form of architectural specifications) accumulate, techniques and tools that support architectural understanding, testing, reengineering, maintenance, and reuse will become increasingly important. This paper introduces a new form of slicing, named architectural slicing, to aid architectural understanding and reuse. In contrast to traditional slicing, architectural slicing operates on the architectural specification of a software system rather than on the source code of a program, and provides knowledge about the high-level structure of a system rather than the low-level implementation details of a program. To compute an architectural slice, we present the architecture information flow graph, which represents information flows in a software architecture, and give a two-phase algorithm that computes an architectural slice over this graph.|
|108.||Jianjun Zhao, Jingde Cheng, Kazuo Ushijima, Static slicing of concurrent object-oriented programs, Proceedings of the 20th Annual International Computer Software & Applications Conference (COMPSAC'96), 312-320, 1996.01, Program slicing has many applications, including program debugging, testing, maintenance, and complexity measurement. This paper addresses the problem of slicing concurrent object-oriented programs, which had not previously been treated in the literature. To solve this problem, we propose a new program dependence representation named the system dependence net (SDN), which extends previous program dependence representations to concurrent object-oriented programs. An SDN of a concurrent object-oriented program consists of a collection of procedure dependence nets, each representing a main procedure, a free-standing procedure, or a method in a class of the program, together with additional arcs representing direct dependences between a call and the called procedure/method, as well as transitive interprocedural data dependences. The SDN is constructed to represent not only object-oriented features but also concurrency issues in a concurrent object-oriented program. Once a program is represented by its SDN, slices of the program can be computed on the SDN as a simple vertex reachability problem in the net.|
|109.||Jianjun Zhao, Jingde Cheng, Kazuo Ushijima, Program dependence analysis of concurrent logic programs and its applications, Proceedings of the 1996 International Conference on Parallel and Distributed Systems (ICPADS'96), 282-291, 1996, In this paper a formal model for program dependence analysis of concurrent logic programs is proposed, with the following contributions. First, two language-independent program representations are presented for explicitly representing control flows and/or data flows in a concurrent logic program. Based on these representations, program dependences between literals in concurrent logic programs are then defined formally, and a dependence-based program representation named the Literal Dependence Net (LDN) is presented for explicitly representing the primary program dependences in a concurrent logic program. Finally, as applications of the LDN, several important software engineering activities, including program slicing, debugging, testing, complexity measurement, and maintenance, are discussed in the context of a programming environment for concurrent logic programs.|
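Several of the entries above (notably 104 and 108) compute static slices as a vertex reachability problem over a dependence graph or net. As a minimal illustration of that idea, using a hypothetical toy graph rather than code or data from any of the papers, a backward slice can be sketched as reverse reachability from the slicing criterion:

```python
from collections import defaultdict, deque

# Hypothetical toy dependence graph for illustration only:
# an edge (u, v) records that statement v depends on statement u
# (via a control or data dependence arc).
edges = [
    ("s1", "s2"),  # s2 uses a value defined at s1
    ("s1", "s3"),
    ("s3", "s4"),
    ("s2", "s4"),
    ("s5", "s6"),  # an unrelated chain of computation
]

def backward_slice(edges, criterion):
    """Return every statement the criterion transitively depends on,
    computed as reverse reachability on the dependence graph."""
    preds = defaultdict(set)
    for u, v in edges:
        preds[v].add(u)
    seen, work = {criterion}, deque([criterion])
    while work:
        for p in preds[work.popleft()]:
            if p not in seen:
                seen.add(p)
                work.append(p)
    return seen

print(sorted(backward_slice(edges, "s4")))  # → ['s1', 's2', 's3', 's4']
```

Note that "s5" and "s6" are excluded from the slice on "s4": once the program is represented by its dependence graph, slicing reduces to this simple graph traversal, which is the point the papers' representations (MDG, SDN, LDN) are designed to enable.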