Help us improve Soot by giving us your feedback!

Over the past years, Soot has seen a larger and larger user base. It makes us happy that so many people find Soot useful, and we are particularly grateful for the help we have received in the form of feedback, bug reports, bug fixes, and even newly contributed features. Thanks for giving back!

In early 2017, we plan to apply for government funding to support the future development and maintenance of Soot.

Sound good? Then please support us by filling out this short web form.

Your response helps us in two ways:

  • By letting us know how we can improve Soot, you directly help us prioritize newly planned features.
  • By stating your name and affiliation, you help us showcase Soot’s large user base.


Harvester scores 1st place at the German IT-Sicherheitspreis

Copyright: Catharina Frank

Harvester has scored 1st place at this year’s German IT-Sicherheitspreis! Every two years, the Horst Görtz Foundation awards EUR 200,000 to the three winners, with first place receiving EUR 100,000. More at Heise (in German).

Many thanks to the judges and the foundation! Harvester is planned to be available soon as a plugin for our CodeInspect framework. We are also considering providing a command-line version to interested customers.

Talks at the Second International Workshop on Agile Secure Software Development (ASSD’16)

The workshop was an opportunity to share experiences and ideas about developing secure software using agile processes. Hasan Yasar from the Software Engineering Institute (SEI), CMU, opened the sessions with a keynote talk on SEI’s experience in integrating security into DevOps for its clients. He talked about how DevOps principles were applied to develop secure applications, from the initial idea to the delivery of a completed application to the production environment, including operational support and incident response. He explained how to address application security concerns early in the development lifecycle and how to address threats at different decision points using integrated DevOps platforms. He emphasized the value of reference architectures, team collaboration, an organization’s culture, considering security throughout the development lifecycle, and continuous feedback.

The keynote was followed by a talk by Vaishnavi Mohan, who discussed the aspects of SecDevOps that authors have been discussing in the literature. In the second session, Tosin Daniel Oyetoyan reported on his study of the relationships between software security skills, usage, and training needs in agile settings at two Norwegian companies. They observed a positive linear relationship between the practice of security activities in software development projects and the participating developers’ skills and knowledge of software security. This suggests, if confirmed by further studies, that training developers in developing secure software leads to the production of secure software. Next, Kalle Rindell gave an overview of the experience he and his colleagues had building a secure identity management system in an agile environment for the Finnish government. In the third talk, Lotfi ben Othmane discussed their study on incremental security assurance for the e-commerce software Zen Cart.

In the afternoon session, Chad Heitzenrater discussed the application of economic utility functions within the negative use case development process to help development managers decide among alternatives for mitigating abuse cases. The presentation was followed by a talk by Daniela S. Cruzes, who reported on a study they performed to identify the factors that influence the testing of non-functional requirements in agile teams: priority, culture, awareness, time pressure, cost, and technical issues.


The full ARES program, including the workshop program, is available here.

Time for Addressing Software Security Issues: Prediction Models and Impacting Factors

The second paper resulting from our collaboration with SAP on developing models for estimating the time to fix security issues has been published in the Data Science and Engineering journal (Springer). In this paper, we quantitatively investigate the major factors that impact the time it takes to fix a given security issue, based on data collected automatically within SAP’s secure development process, and we show how the issue fix time can be used to monitor the fixing process. The work shows that the time it takes to fix an issue is related far more to the component in which the potential vulnerability resides, the project related to the issue, the development groups that address the issue, and the closeness of the software release date. This indicates that the software structure, the fixing processes, and the development groups are the dominant factors impacting the time spent addressing security issues. The models could be used to implement continuous improvement of secure software development processes and to measure the impact of individual improvements. The paper is published open access and is available here.

An In-Depth Study of More Than Ten Years of Java Exploitation

I am happy and proud to present our first CCS paper! Together with co-authors Philipp Holzinger, Stefan Triller, and Alexandre Bartel, we present an in-depth study of all available Java exploits we were able to find online. The exploits cover many different kinds of attack vectors and span more than 15 years, and they highlight important weaknesses in the Java runtime. The study explains in detail the different weaknesses the exploits abuse. The paper is already available here. Further, we will soon make some artifacts available on this website (not the exploits, though).

Thanks to Marco Pistoia for his constructive feedback and to Julian Dolby for providing us with the IBM JDKs we required for our study! Thanks also to Oracle, which supported us through a Collaborative Research Grant, and to the DFG’s Priority Program 1496 “Reliably Secure Software Systems”, which funded the work through its project INTERFLOW!

See you all in Vienna!

Making Static Analysis More Accessible to Software Developers

Fraunhofer research fellow selected for the Grand Finals of the »ACM Student Research Competition 2016«

For her contribution to this year’s Student Research Competition at the »37th annual ACM SIGPLAN conference on Programming Language Design and Implementation (PLDI)«, a Fraunhofer research fellow scored first place in the PLDI Student Research Competition and has therefore been selected for the Grand Finals of the »ACM Student Research Competition«.

In industrial settings, static analysis is widely used to ensure code quality and security. It is still largely treated as a batch-style activity: developers run static analysis tools, wait for them to finish, and then examine the results. This process often takes hours to complete, prompting development teams to run static analysis tools as part of nightly builds. During this time, the code often evolves further, so some reported errors are already obsolete by the time they appear. Additionally, end-user experience shows that developers have to deal with numerous results and spend considerable effort sorting out false positives and prioritizing the warnings they want to correct. The classification techniques offered by many tools give developers a better overview of the analysis results, but the problem remains that an analysis is still treated as a black box: software developers have limited influence on, and understanding of, what the analysis finds and why it finds it.

Thien-Duyen Lisa Nguyen Quang Do, research fellow at Fraunhofer IEM (Department Software Engineering, Prof. Dr. Eric Bodden), researches the field of User-Centric Static Analysis. She advocates that static analyses should be able to accommodate specific users’ requirements about the behavior of the analysis in specific situations.

One of her ideas toward this goal, developed in collaboration with TU Darmstadt, Paderborn University, Microsoft Research, and NC State University, introduces a layered analysis framework in which the developer can explicitly direct the analysis and control which paths it visits first. Analyses written in a layered manner deliver the results of greatest interest to the user within a short time, thus addressing the shortcomings of batch-style analyses. Layered analyses can easily be integrated into an Integrated Development Environment such as the Eclipse IDE in a way that interacts more closely with the code developer.
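To make the layering idea concrete, here is a minimal, purely hypothetical sketch (not the actual framework): analysis work items carry a user-assigned layer number, and a scheduler processes and reports lower layers completely before moving to higher ones, so results for code the developer cares about arrive first. All class and method names here are invented for illustration.

```java
import java.util.*;

// Hypothetical sketch of a layered analysis scheduler: each work item
// (e.g. a method to analyze) carries a user-assigned layer, and items
// in lower layers are analyzed and reported before those in higher ones.
public class LayeredScheduler {
    record WorkItem(String method, int layer) {}

    // Priority queue ordered by layer: smallest layer number first.
    private final PriorityQueue<WorkItem> queue =
        new PriorityQueue<>(Comparator.comparingInt(WorkItem::layer));

    public void submit(String method, int layer) {
        queue.add(new WorkItem(method, layer));
    }

    // Drains the queue in layer order, returning the methods in the
    // order they would be analyzed (and their results shown to the user).
    public List<String> analysisOrder() {
        List<String> order = new ArrayList<>();
        while (!queue.isEmpty()) {
            order.add(queue.poll().method());
        }
        return order;
    }

    public static void main(String[] args) {
        LayeredScheduler s = new LayeredScheduler();
        s.submit("libraryHelper", 2);   // background code: analyze later
        s.submit("handleUserInput", 0); // user-flagged hot spot: analyze first
        s.submit("parseConfig", 1);
        // prints [handleUserInput, parseConfig, libraryHelper]
        System.out.println(s.analysisOrder());
    }
}
```

In a real layered analysis the layers would be derived from developer input inside the IDE (e.g. the currently edited file first), but the scheduling principle is the same: prioritized work items instead of one monolithic batch run.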

Lisa Nguyen presented her research at the Student Research Competition of the »37th annual ACM SIGPLAN conference on Programming Language Design and Implementation« (PLDI), held from 13 to 17 June 2016 in Santa Barbara, California, United States.

PLDI is the premier conference in the field of programming languages, covering the topics of design, implementation, theory, and efficient use of programming languages. ACM’s Student Research Competition is an internationally recognized venue enabling undergraduate and graduate students to present research results and exchange ideas with other students, judges, and conference attendees. It spans several premier conferences (such as PLDI, ICSE, CHI, or FSE) at which students present their research, first as an abstract, then as a poster, and finally in a presentation. The first-place graduate and undergraduate students selected at each conference are then invited to participate in the Grand Finals the following year. Lisa Nguyen scored first place in the graduate category.

Become a Post-Doc Researcher at Paderborn University!

We are still looking for one to two postdoctoral researchers to complement our research group at Paderborn University. For further information, please consult our previous announcement here. As stated there, please direct your applications to

If you have a deep interest in software engineering, especially software security, then I am very much looking forward to your application! In particular, I am interested in candidates with a proven track record (at least two papers at highly reputable venues) in any of these subject areas:

  • Static and/or dynamic program analysis
  • Software Security
  • Systems Security
  • Applied (!) cryptography and/or cryptanalysis

Should we reject papers with bogus artifacts?

Yesterday I blogged about the accepted artifacts at ISSTA. I think it is worth noting that, of the ten accepted papers that had artifacts submitted, there were seven for which the artifacts checked out. That is a good thing!

What worries me, however, are the three papers for which artifact evaluation failed. For those three papers, we were largely unable to reproduce their results, and yet the papers made it into the program. Moreover, for two of those three, the fact that they had failed artifact evaluation was already known before the PC meeting, i.e., there would have been a chance to reject them.

The reason the PC did not is that only positive reviews were taken into account this time, so as not to discourage people from submitting artifacts in the future. We as a community should really think about whether we can find a way to make artifact evaluation the default, so that authors have no choice but to submit all the evidence they have to back up their claims.