|
[88]
|
FIXX: finding exploits from examples
(Neil Thimmaiah, Yashaswi Dave, Rigel Gjomemo, V. N. Venkatakrishnan)
Proceedings of the 34th USENIX Security Symposium (SEC'25)
Abstract
Comprehensively analyzing modern-day web applications to detect different vulnerabilities and related exploits is challenging and time-consuming. Security researchers spend significant time discovering vulnerabilities and creating exploit disclosures. However, such disclosures are often limited to single vulnerability instances and do not contain information about other instances of the same vulnerability in the application. In this paper, we propose FIXX, a tool that can automatically find multiple similar exploits from taint-style vulnerabilities inside the same PHP application. FIXX aims to help web application developers detect all possible instances of a known exploit within the program's code. To do this, FIXX combines novel notions of path and graph similarity over graph representations of code. We evaluate FIXX on 32 CVE reports containing cross-site scripting and SQL injection vulnerabilities associated with 19 PHP applications and discover 1097 similar exploitable paths leading to 10 new CVE entries.
►bibtex
PDF
@inproceedings{Thimmaiah:fixx:usenixsec25,
abstract = {Comprehensively analyzing modern-day web applications to detect different vulnerabilities and related exploits is challenging and time-consuming. Security researchers spend significant time discovering vulnerabilities and creating exploit disclosures. However, such disclosures are often limited to single vulnerability instances and do not contain information about other instances of the same vulnerability in the application. In this paper, we propose FIXX, a tool that can automatically find multiple similar exploits from taint-style vulnerabilities inside the same PHP application. FIXX aims to help web application developers detect all possible instances of a known exploit within the program's code. To do this, FIXX combines novel notions of path and graph similarity over graph representations of code. We evaluate FIXX on 32 CVE reports containing cross-site scripting and SQL injection vulnerabilities associated with 19 PHP applications and discover 1097 similar exploitable paths leading to 10 new CVE entries.},
address = {Seattle, WA, USA},
articleno = 426,
author = {Neil Thimmaiah and Yashaswi Dave and Rigel Gjomemo and V. N. Venkatakrishnan},
booktitle = {Proceedings of the 34th USENIX Security Symposium},
date-modified = {2026-02-16 08:47:55 -0600},
isbn = {978-1-939133-52-6},
keywords = {exploit generation; web application security},
month = {Aug},
publisher = {USENIX Association},
series = {{SEC}'25},
title = {FIXX: finding exploits from examples},
url = {https://www.usenix.org/conference/usenixsecurity25/presentation/thimmaiah},
year = 2025,
bdsk-url-1 = {https://www.usenix.org/conference/usenixsecurity25/presentation/thimmaiah},
}
|
|
[87]
|
Real-time Analytics for APT Detection and Threat Hunting Using Cyber-threat Intelligence and Provenance Graphs (Invited Paper)
(V. N. Venkatakrishnan)
Proceedings of the 10th ACM International Workshop on Security and Privacy Analytics (IWSPA '25)
Abstract
The persistent and stealthy nature of Advanced Persistent Threats (APTs) poses a significant challenge to enterprise security. Traditional detection mechanisms often fall short in identifying coordinated multi-step attacks or leveraging the rich context available in Cyber Threat Intelligence (CTI). The three works presented tackle this problem from complementary angles -- real-time detection, correlation-based threat hunting, and automated intelligence extraction. A unifying thread across these three works is their shared reliance on provenance graphs as a powerful abstraction for capturing and reasoning about complex attacker behavior. Together, these approaches form a complementary ecosystem: Extractor extracts threat knowledge, POIROT hunts for manifestations of that knowledge, and HOLMES detects emergent threats in real-time, all grounded in a common graph-based representation of system activity and threat behavior. HOLMES introduces a real-time detection framework aimed at identifying the coordinated activities typical of APT campaigns. It does so by correlating suspicious information flows to generate a robust detection signal and constructing high-level provenance graphs that summarize attacker behavior for analyst response. Its evaluation shows high precision and low false alarm rates, supporting its applicability in live operational environments. POIROT builds on the growing use of CTI standards by actively leveraging the relationships between indicators---often underused in practice---for threat hunting. It treats the problem as a graph pattern matching task, aligning CTI-derived graphs with system-level provenance data obtained from kernel audits. Its novel similarity metric enables efficient search through massive graphs, revealing APT traces within minutes and demonstrating the operational utility of CTI relationship data. 
Extractor addresses the challenge of unstructured CTI reports by automatically transforming them into structured, machine-usable provenance graphs. Without requiring strict assumptions about the input text, it extracts concise behavioral indicators that can be fed into threat-hunting tools like POIROT, bridging the gap between raw intelligence and analytical application. Together, these systems represent a shift toward graph-based, intelligence-driven detection and response. They emphasize the value of integrating real-time monitoring with structured threat intelligence and automation, setting the stage for more adaptive and effective cybersecurity operations.
►bibtex
PDF DOI: 10.1145/3716815.3729016
@inproceedings{Venkatakrishnan:keynote:IWSPA25,
abstract = {The persistent and stealthy nature of Advanced Persistent Threats (APTs) poses a significant challenge to enterprise security. Traditional detection mechanisms often fall short in identifying coordinated multi-step attacks or leveraging the rich context available in Cyber Threat Intelligence (CTI). The three works presented tackle this problem from complementary angles -- real-time detection, correlation-based threat hunting, and automated intelligence extraction. A unifying thread across these three works is their shared reliance on provenance graphs as a powerful abstraction for capturing and reasoning about complex attacker behavior. Together, these approaches form a complementary ecosystem: Extractor extracts threat knowledge, POIROT hunts for manifestations of that knowledge, and HOLMES detects emergent threats in real-time, all grounded in a common graph-based representation of system activity and threat behavior. HOLMES introduces a real-time detection framework aimed at identifying the coordinated activities typical of APT campaigns. It does so by correlating suspicious information flows to generate a robust detection signal and constructing high-level provenance graphs that summarize attacker behavior for analyst response. Its evaluation shows high precision and low false alarm rates, supporting its applicability in live operational environments. POIROT builds on the growing use of CTI standards by actively leveraging the relationships between indicators---often underused in practice---for threat hunting. It treats the problem as a graph pattern matching task, aligning CTI-derived graphs with system-level provenance data obtained from kernel audits. Its novel similarity metric enables efficient search through massive graphs, revealing APT traces within minutes and demonstrating the operational utility of CTI relationship data. 
Extractor addresses the challenge of unstructured CTI reports by automatically transforming them into structured, machine-usable provenance graphs. Without requiring strict assumptions about the input text, it extracts concise behavioral indicators that can be fed into threat-hunting tools like POIROT, bridging the gap between raw intelligence and analytical application. Together, these systems represent a shift toward graph-based, intelligence-driven detection and response. They emphasize the value of integrating real-time monitoring with structured threat intelligence and automation, setting the stage for more adaptive and effective cybersecurity operations.},
address = {Pittsburgh, PA, USA},
author = {V. N. Venkatakrishnan},
booktitle = {Proceedings of the 10th ACM International Workshop on Security and Privacy Analytics},
date-modified = {2026-02-16 18:17:34 -0600},
doi = {10.1145/3716815.3729016},
isbn = 9798400715013,
keywords = {provenance graphs, threat hunting, graph analytics, cyber threat intelligence},
location = {Pittsburgh, PA, USA},
month = {June},
series = {IWSPA '25},
title = {Real-time Analytics for APT Detection and Threat Hunting Using Cyber-threat Intelligence and Provenance Graphs (Invited Paper)},
url = {https://doi.org/10.1145/3716815.3729016},
year = 2025,
bdsk-url-1 = {https://doi.org/10.1145/3716815.3729016},
}
|
|
[86]
|
Citar: Cyberthreat Intelligence-driven Attack Reconstruction
(Sutanu Kumar Ghosh, Rigel Gjomemo, V. N. Venkatakrishnan)
Proceedings of the Fifteenth ACM Conference on Data and Application Security and Privacy (CODASPY '25), pp. 245–256
Abstract
Security Operation Centers (SOCs) are the first line of defense against an increasingly complex and sophisticated environment of advanced persistent threats (APTs). Inside SOCs, analysts deal with thousands of alerts every day and have to make real-time decisions about whether alerts are worth investigating further. However, they face several challenges in efficiently investigating a significant number of alerts daily and reconstructing attack scenarios from those alerts. In this paper, we present Citar, an approach for leveraging cyber threat intelligence (CTI) to facilitate attack scenario reconstruction. Citar enhances alert investigation by attributing alerts to potential attacker groups and examining audit logs for related attack instances. Utilizing a new correlation analysis developed for this purpose, we identify potential connections between flagged alerts and known attack behaviors present in a system. Citar is evaluated using a DARPA public dataset and 10 new attack scenarios (five real-world APT groups and five popular malwares). Our evaluation shows that augmenting existing detection mechanisms with Citar improves detection performance by up to 57\%, significantly aiding SOC analysts in alert investigations and attack reconstructions.
►bibtex
PDF DOI: 10.1145/3714393.3726519
@inproceedings{Ghosh:Citar:codaspy25,
abstract = {Security Operation Centers (SOCs) are the first line of defense against an increasingly complex and sophisticated environment of advanced persistent threats (APTs). Inside SOCs, analysts deal with thousands of alerts every day and have to make real-time decisions about whether alerts are worth investigating further. However, they face several challenges in efficiently investigating a significant number of alerts daily and reconstructing attack scenarios from those alerts. In this paper, we present Citar, an approach for leveraging cyber threat intelligence (CTI) to facilitate attack scenario reconstruction. Citar enhances alert investigation by attributing alerts to potential attacker groups and examining audit logs for related attack instances. Utilizing a new correlation analysis developed for this purpose, we identify potential connections between flagged alerts and known attack behaviors present in a system. Citar is evaluated using a DARPA public dataset and 10 new attack scenarios (five real-world APT groups and five popular malwares). Our evaluation shows that augmenting existing detection mechanisms with Citar improves detection performance by up to 57\%, significantly aiding SOC analysts in alert investigations and attack reconstructions.},
address = {New York, NY, USA},
author = {Sutanu Kumar Ghosh and Rigel Gjomemo and V. N. Venkatakrishnan},
booktitle = {Proceedings of the Fifteenth ACM Conference on Data and Application Security and Privacy},
date-modified = {2026-02-16 08:38:33 -0600},
doi = {10.1145/3714393.3726519},
isbn = 9798400714764,
keywords = {advanced persistent threats, alert correlation, attack reconstruction, cyber threat intelligence},
location = {Pittsburgh, PA, USA},
month = {June},
numpages = 12,
pages = {245--256},
publisher = {Association for Computing Machinery},
series = {CODASPY '25},
title = {Citar: Cyberthreat Intelligence-driven Attack Reconstruction},
url = {https://doi.org/10.1145/3714393.3726519},
year = 2025,
bdsk-url-1 = {https://doi.org/10.1145/3714393.3726519},
}
|
|
[85]
|
SemFinder: A Semantics-Based Approach to Enhance Vulnerability Analysis in Web Applications
(Neil P. Thimmaiah, Rigel Gjomemo, V. N. Venkatakrishnan)
Proceedings of the Fifteenth ACM Conference on Data and Application Security and Privacy (CODASPY '25), pp. 30–41
Abstract
Modern web applications are becoming increasingly complex. They include multiple dynamic runtime constructs that are difficult to analyze by static application security testing (SAST) tools. These tools often use a graph representation of the code for their analysis. However, built statically, such graphs may miss important data and control flows dependent on runtime information. In addition, the presence of difficult-to-analyze code patterns in modern web applications, referred to as testability tarpits, further reduces the accuracy of statically built graphs. As a result, current SAST tools have several false negatives because of 'hidden' paths, which are not present in the graphs. In this paper, we present SemFinder, an approach designed to automatically detect such hidden paths. SemFinder uses natural language semantics to hypothesize connections between different locations in the code based on the meaning and similarity of the variables in those locations and test those hypotheses dynamically. We evaluate SemFinder on 30 PHP applications and discover 215 new exploitable hidden paths with respect to existing SAST tools, leading to the submission of 31 new CVEs.
►bibtex
PDF DOI: 10.1145/3714393.3726513
@inproceedings{Thimmaiah:SemFinder:codaspy25,
abstract = {Modern web applications are becoming increasingly complex. They include multiple dynamic runtime constructs that are difficult to analyze by static application security testing (SAST) tools. These tools often use a graph representation of the code for their analysis. However, built statically, such graphs may miss important data and control flows dependent on runtime information. In addition, the presence of difficult-to-analyze code patterns in modern web applications, referred to as testability tarpits, further reduces the accuracy of statically built graphs. As a result, current SAST tools have several false negatives because of 'hidden' paths, which are not present in the graphs. In this paper, we present SemFinder, an approach designed to automatically detect such hidden paths. SemFinder uses natural language semantics to hypothesize connections between different locations in the code based on the meaning and similarity of the variables in those locations and test those hypotheses dynamically. We evaluate SemFinder on 30 PHP applications and discover 215 new exploitable hidden paths with respect to existing SAST tools, leading to the submission of 31 new CVEs.},
address = {New York, NY, USA},
author = {Neil P. Thimmaiah and Rigel Gjomemo and V. N. Venkatakrishnan},
booktitle = {Proceedings of the Fifteenth ACM Conference on Data and Application Security and Privacy},
date-modified = {2026-02-16 08:31:42 -0600},
doi = {10.1145/3714393.3726513},
isbn = 9798400714764,
keywords = {PHP, software analysis, web application security},
location = {Pittsburgh, PA, USA},
month = {June},
numpages = 12,
pages = {30--41},
publisher = {Association for Computing Machinery},
series = {CODASPY '25},
title = {SemFinder: A Semantics-Based Approach to Enhance Vulnerability Analysis in Web Applications},
url = {https://doi.org/10.1145/3714393.3726513},
year = 2025,
bdsk-url-1 = {https://doi.org/10.1145/3714393.3726513},
}
|
|
[84]
|
Web Browser Security and Privacy
(V. N. Venkatakrishnan)
Encyclopedia of Cryptography, Security and Privacy (ECSP), pp. 1–2
Abstract
Web browser security and privacy collectively refers to (a) the integrity of the browser platform that accepts, processes, and communicates end-user data to web sites and (b) the confidentiality and integrity of this information exchanged.
►bibtex
PDF DOI: 10.1007/978-1-4419-5906-5_665
@incollection{venkatakrishnan:browser:encyclopedia24,
abstract = {Web browser security and privacy collectively refers to (a) the integrity of the browser platform that accepts, processes, and communicates end-user data to web sites and (b) the confidentiality and integrity of this information exchanged.},
author = {V. N. Venkatakrishnan},
booktitle = {Encyclopedia of Cryptography, Security and Privacy},
date-modified = {2026-02-17 20:40:24 -0600},
doi = {10.1007/978-1-4419-5906-5_665},
keywords = {browser security; browser extension},
month = {Jan},
pages = {1--2},
publisher = {Springer},
series = {{ECSP}},
title = {Web Browser Security and Privacy},
url = {https://link.springer.com/rwe/10.1007/978-1-4419-5906-5_665},
year = 2025,
}
|
|
[83]
|
Applications of Formal Methods to Web Application Security
(V. N. Venkatakrishnan)
Encyclopedia of Cryptography, Security and Privacy (ECSP), pp. 1–3
Abstract
The use of formal methods in web application security refers to the use of techniques such as static analysis and model checking to analyze web application software for security properties.
►bibtex
PDF DOI: 10.1007/978-1-4419-5906-5_856
@incollection{Venkatakrishnan:FormalMethods:encyclopedia24,
abstract = {The use of formal methods in web application security refers to the use of techniques such as static analysis and model checking to analyze web application software for security properties.},
author = {V. N. Venkatakrishnan},
booktitle = {Encyclopedia of Cryptography, Security and Privacy},
date-modified = {2026-02-17 21:37:04 -0600},
doi = {10.1007/978-1-4419-5906-5_856},
keywords = {web application security; formal methods},
month = {Jan},
pages = {1--3},
publisher = {Springer},
series = {{ECSP}},
title = {Applications of Formal Methods to Web Application Security},
url = {https://link.springer.com/rwe/10.1007/978-1-4419-5906-5_856},
year = 2025,
}
|
|
[82]
|
Data Science and AI for Sustainable Futures: opportunities and challenges
(Gavin Shaddick, David Topping, TC Hales, Usama Kadri, Joanna Patterson, John Pickett, Ioan Petri, Stuart Taylor, Peiyuan Li, Ashish Sharma, V. N. Venkatakrishnan, Abhinav Wadhwa, Jennifer Ding, Ruth Bowyer, Omer Rana)
Sustainability (Sust)
►bibtex
PDF
@article{Shaddick:sustainability24,
author = {Gavin Shaddick and David Topping and TC Hales and Usama Kadri and Joanna Patterson and John Pickett and Ioan Petri and Stuart Taylor and Peiyuan Li and Ashish Sharma and V. N. Venkatakrishnan and Abhinav Wadhwa and Jennifer Ding and Ruth Bowyer and Omer Rana},
date-added = {2026-02-15 21:13:31 -0600},
date-modified = {2026-02-16 23:14:20 -0600},
journal = {Sustainability},
keywords = {sustainability; artificial intelligence},
month = {December},
series = {{Sust}},
title = {Data Science and AI for Sustainable Futures: opportunities and challenges},
url = {https://www.mdpi.com/2071-1050/17/5/2019},
year = 2024,
bdsk-url-1 = {https://www.mdpi.com/2071-1050/17/5/2019},
}
|
|
[81]
|
ReactAppScan: Mining React Application Vulnerabilities via Component Graph
(Zhiyong Guo, Mingqing Kang, V. N. Venkatakrishnan, Rigel Gjomemo, Yinzhi Cao)
Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security (CCS '24), pp. 585–599
Abstract
React, a single-page application framework, has recently become popular among web developers due to its flexible and convenient management of web application states via a syntax extension to JavaScript, called JSX (JavaScript and XML). Despite its abundant functionalities, the security of React, especially vulnerability detection, still lags: many existing vulnerability detection works do not support JSX, let alone React Data Flow introduced by React components. The only exception is CodeQL, which supports JSX syntax. However, CodeQL cannot properly track React Data Flow across different components for detecting vulnerabilities. In this paper, we design a novel framework, called ReactAppScan, which constructs a Component Graph (CoG) for tracking React Data Flow and detecting vulnerabilities following both JavaScript and React data flows. Specifically, ReactAppScan relies on abstract interpretation to build such a component graph via tracking component lifecycles and then detects vulnerabilities via finding paths between sources and sinks. Our evaluation shows that ReactAppScan detects 61 zero-day vulnerabilities in real-world React applications. We have responsibly reported all the vulnerabilities and so far six vulnerabilities have been fixed and two have been acknowledged.
►bibtex
PDF DOI: 10.1145/3658644.3670331
@inproceedings{Guo:ReactAppScan:CCS24,
abstract = {React, a single-page application framework, has recently become popular among web developers due to its flexible and convenient management of web application states via a syntax extension to JavaScript, called JSX (JavaScript and XML). Despite its abundant functionalities, the security of React, especially vulnerability detection, still lags: many existing vulnerability detection works do not support JSX, let alone React Data Flow introduced by React components. The only exception is CodeQL, which supports JSX syntax. However, CodeQL cannot properly track React Data Flow across different components for detecting vulnerabilities. In this paper, we design a novel framework, called ReactAppScan, which constructs a Component Graph (CoG) for tracking React Data Flow and detecting vulnerabilities following both JavaScript and React data flows. Specifically, ReactAppScan relies on abstract interpretation to build such a component graph via tracking component lifecycles and then detects vulnerabilities via finding paths between sources and sinks. Our evaluation shows that ReactAppScan detects 61 zero-day vulnerabilities in real-world React applications. We have responsibly reported all the vulnerabilities and so far six vulnerabilities have been fixed and two have been acknowledged.},
address = {New York, NY, USA},
author = {Zhiyong Guo and Mingqing Kang and V. N. Venkatakrishnan and Rigel Gjomemo and Yinzhi Cao},
booktitle = {Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security},
date-modified = {2026-02-16 08:32:58 -0600},
doi = {10.1145/3658644.3670331},
isbn = 9798400706363,
keywords = {component graph, single-page application, vulnerability analysis},
location = {Salt Lake City, UT, USA},
month = {Oct},
numpages = 15,
pages = {585--599},
publisher = {Association for Computing Machinery},
series = {CCS '24},
title = {ReactAppScan: Mining React Application Vulnerabilities via Component Graph},
url = {https://doi.org/10.1145/3658644.3670331},
year = 2024,
bdsk-url-1 = {https://doi.org/10.1145/3658644.3670331},
}
|
|
[80]
|
TIPCE: A Longitudinal Threat Intelligence Platform Comprehensiveness Analysis
(Kiavash Satvat, Rigel Gjomemo, V. N. Venkatakrishnan)
Proceedings of the Fourteenth ACM Conference on Data and Application Security and Privacy (CODASPY '24), pp. 349–360
Abstract
Threat Intelligence (TI) serves as a vital component of cybersecurity, empowering organizations to combat cyber threats proactively. While existing research primarily focuses on analyzing threat intelligence feeds from Threat Intelligence Sharing Platforms (TISPs), the extensive data available within TISPs' knowledge bases remains largely unexplored. This study aims to fill this gap by proposing a novel approach to perform the first in-depth empirical study of prominent TISPs' databases. To achieve this, we propose an innovative approach to construct a ground truth dataset of Indicators of Compromise (IOCs) derived from threat reports. We implement our approach in a tool called TIPCE, which processes over 50,000 threat reports, extracting more than 182K IOCs with high accuracy. TIPCE leverages this dataset to measure and study different features of four known TISP databases, including their coverage, overlap, and timeliness. Our results provide novel longitudinal insights into TISPs, including their distinct performance per IOC type and considerable overlap between TISP databases.
►bibtex
PDF DOI: 10.1145/3626232.3653278
@inproceedings{Satvat:TIPCE:codaspy24,
abstract = {Threat Intelligence (TI) serves as a vital component of cybersecurity, empowering organizations to combat cyber threats proactively. While existing research primarily focuses on analyzing threat intelligence feeds from Threat Intelligence Sharing Platforms (TISPs), the extensive data available within TISPs' knowledge bases remains largely unexplored. This study aims to fill this gap by proposing a novel approach to perform the first in-depth empirical study of prominent TISPs' databases. To achieve this, we propose an innovative approach to construct a ground truth dataset of Indicators of Compromise (IOCs) derived from threat reports. We implement our approach in a tool called TIPCE, which processes over 50,000 threat reports, extracting more than 182K IOCs with high accuracy. TIPCE leverages this dataset to measure and study different features of four known TISP databases, including their coverage, overlap, and timeliness. Our results provide novel longitudinal insights into TISPs, including their distinct performance per IOC type and considerable overlap between TISP databases.},
address = {New York, NY, USA},
author = {Kiavash Satvat and Rigel Gjomemo and V. N. Venkatakrishnan},
booktitle = {Proceedings of the Fourteenth ACM Conference on Data and Application Security and Privacy},
date-modified = {2026-02-16 18:17:58 -0600},
doi = {10.1145/3626232.3653278},
isbn = 9798400704215,
keywords = {cyber threat intelligence},
location = {Porto, Portugal},
month = {June},
numpages = 12,
pages = {349--360},
publisher = {Association for Computing Machinery},
series = {CODASPY '24},
title = {TIPCE: A Longitudinal Threat Intelligence Platform Comprehensiveness Analysis},
url = {https://doi.org/10.1145/3626232.3653278},
year = 2024,
bdsk-url-1 = {https://doi.org/10.1145/3626232.3653278},
}
|
|
[79]
|
OCPPStorm: A Comprehensive Fuzzing Tool for OCPP Implementations
(Gaetano Coppoletta, Rigel Gjomemo, Amanjot Kaur, Nima Valizadeh, Omer Rana, V. N. Venkatakrishnan)
Proceedings of Vehicular Security Conference (VehicleSec'24)
Abstract
In the last decade, electric vehicles (EVs) have moved from a niche of the transportation sector to its most innovative, dynamic, and growing sector. The associated EV charging infrastructure is closely following behind. One of the main components of such infrastructure is the Open Charge Point Protocol (OCPP), which defines the messages exchanged between charging stations and central management systems owned by charging companies. This paper presents OCPPStorm, a tool for testing the security of OCPP implementations. OCPPStorm is designed as a black-box testing tool, in order to be able to deal with different implementations, independently of their deployment peculiarities, platforms, or languages used. In particular, OCPPStorm applies fuzzing techniques to the OCPP messages to identify errors in the message management and find vulnerabilities among those errors. Its efficacy is demonstrated through extensive testing on two open-source OCPP systems, revealing its proficiency in uncovering critical security flaws, among which are 5 confirmed CVEs and 7 under review. OCPPStorm's goal is to bolster the methodological approach to OCPP security testing, thereby reinforcing the reliability and safety of the EV charging ecosystem.
►bibtex
PDF
@inproceedings{Coppoletta:vehiclesec24,
abstract = {In the last decade, electric vehicles (EVs) have moved from a niche of the transportation sector to its most innovative, dynamic, and growing sector. The associated EV charging infrastructure is closely following behind. One of the main components of such infrastructure is the Open Charge Point Protocol (OCPP), which defines the messages exchanged between charging stations and central management systems owned by charging companies. This paper presents OCPPStorm, a tool for testing the security of OCPP implementations. OCPPStorm is designed as a black-box testing tool, in order to be able to deal with different implementations, independently of their deployment peculiarities, platforms, or languages used. In particular, OCPPStorm applies fuzzing techniques to the OCPP messages to identify errors in the message management and find vulnerabilities among those errors. Its efficacy is demonstrated through extensive testing on two open-source OCPP systems, revealing its proficiency in uncovering critical security flaws, among which are 5 confirmed CVEs and 7 under review. OCPPStorm's goal is to bolster the methodological approach to OCPP security testing, thereby reinforcing the reliability and safety of the EV charging ecosystem.},
address = {San Diego, California, USA},
author = {Gaetano Coppoletta and Rigel Gjomemo and Amanjot Kaur and Nima Valizadeh and Omer Rana and V. N. Venkatakrishnan},
booktitle = {Proceedings of Vehicular Security Conference},
date-added = {2026-02-15 20:48:23 -0600},
date-modified = {2026-02-16 08:33:09 -0600},
keywords = {EV charging; vehicle security},
month = {Feb},
series = {{VehicleSec}'24},
title = {OCPPStorm: A Comprehensive Fuzzing Tool for OCPP Implementations},
url = {https://www.ndss-symposium.org/ndss-paper/auto-draft-457/},
year = 2024,
bdsk-url-1 = {https://www.ndss-symposium.org/ndss-paper/auto-draft-457/},
}
|
|
[78]
|
Privacy and trust in artificially intelligent cities
(Charlie Catlett, Juval Portugali, V. N. Venkatakrishnan)
The Crisis of Democracy in the Age of Cities (Chapters), pp. 167–183
Abstract
The notion of a smart city has too often been reduced to the use of technology to automate processes toward more efficiency and cost savings in areas such as transportation, public safety, or energy. These are important objectives to be sure, but the most vexing challenges faced by cities and their inhabitants are less obviously amenable to purely technical solutions, as these relate to human and societal needs such as for opportunity, fairness, safety, and justice. While cities attempt to manage their operations through smart city solutions, no amount of automation or efficiency will automatically make a city "smart" if these human and societal needs are not integrated with, or ideally driving, those solutions. Moreover, a focus on technological systems introduces at least two hidden dangers. First, managing complex technological systems requires controlling and monitoring those systems, leading to concepts such as an "operating system" for the city and a trend from automation to autonomous systems. This is partly fueled by increasing capabilities in artificial intelligence (AI) and machine learning (ML), in turn enabling ever more sophisticated autonomous systems. But operating systems are, predominantly, authoritarian systems and AI capabilities, while extraordinarily useful for many mechanical and mathematical functions, have yet to overcome critical challenges such as bias and judgment, much less understanding human concepts of opportunity or fairness. Second, new technologies introduce new capabilities not yet contemplated by society or governance structures and the potential for derivative capabilities that may not have been anticipated even by the system designers themselves. Thus, these new capabilities operate not only absent appropriate policy but in advance of a clear concept of what those appropriate policies should be!
We discuss these challenges in the context of real-world smart city deployments, the impact of such technologies on assumptions about
►bibtex
PDF
@incollection{Catlett:ideas2023,
abstract = {The notion of a smart city has too often been reduced to the use of technology to automate processes toward more efficiency and cost savings in areas such as transportation, public safety, or energy. These are important objectives to be sure, but the most vexing challenges faced by cities and their inhabitants are less obviously amenable to purely technical solutions, as these relate to human and societal needs such as for opportunity, fairness, safety, and justice. While cities attempt to manage their operations through smart city solutions, no amount of automation or efficiency will automatically make a city ``smart'' if these human and societal needs are not integrated with, or ideally driving, those solutions. Moreover, a focus on technological systems introduces at least two hidden dangers. First, managing complex technological systems requires controlling and monitoring those systems, leading to concepts such as an ``operating system'' for the city and a trend from automation to autonomous systems. This is partly fueled by increasing capabilities in artificial intelligence (AI) and machine learning (ML), in turn enabling ever more sophisticated autonomous systems. But operating systems are, predominantly, authoritarian systems and AI capabilities, while extraordinarily useful for many mechanical and mathematical functions, have yet to overcome critical challenges such as bias and judgment, much less understanding human concepts of opportunity or fairness. Second, new technologies introduce new capabilities not yet contemplated by society or governance structures and the potential for derivative capabilities that may not have been anticipated even by the system designers themselves. Thus, these new capabilities operate not only absent appropriate policy but in advance of a clear concept of what those appropriate policies should be! 
We discuss these challenges in the context of real-world smart city deployments, the impact of such technologies on assumptions about},
author = {Charlie Catlett and Juval Portugali and V. N. Venkatakrishnan},
booktitle = {The Crisis of Democracy in the Age of Cities},
chapter = 9,
date-added = {2026-02-17 08:47:04 -0600},
date-modified = {2026-02-18 07:57:10 -0600},
keywords = {Economics and Finance; Geography; Politics and Public Policy; Urban and Regional Studies},
month = {Oct},
pages = {167-183},
publisher = {Edward Elgar Publishing},
series = {Chapters},
title = {Privacy and trust in artificially intelligent cities},
url = {https://ideas.repec.org/h/elg/eechap/21553_9.html},
year = 2023,
bdsk-url-1 = {https://ideas.repec.org/h/elg/eechap/21553_9.html},
}
|
|
[77]
|
Scaling JavaScript Abstract Interpretation to Detect and Exploit Node.js Taint-style Vulnerability
(Mingqing Kang, Yichao Xu, Song Li, Rigel Gjomemo, Jianwei Hu, V. N. Venkatakrishnan, Yinzhi Cao)
2023 IEEE Symposium on Security and Privacy (SP) (IEEESP'23), pp. 1059-1076
Abstract
Taint-style vulnerabilities, such as OS command injection and path traversal, are common and severe software weaknesses. There exists an inherent trade-off between analysis scalability and accuracy in detecting such vulnerabilities. On one hand, existing syntax-directed approaches often make compromises in the analysis accuracy on dynamic features like bracket syntax. On the other hand, existing abstract interpretation often faces the issue of state explosion in the abstract domain, thus leading to a scalability problem. In this paper, we present a novel approach, called FAST, to scale the vulnerability discovery of JavaScript packages via a novel abstract interpretation approach that relies on two new techniques, called bottom-up and top-down abstract interpretation. The former abstractly interprets functions based on scopes instead of call sequences to construct dynamic call edges. Then, the latter follows specific control-flow paths and prunes the program to skip statements unrelated to the sink. If an end-to-end data-flow path is found, FAST queries the satisfiability of constraints along the path and verifies the exploitability to reduce human efforts. We implement a prototype of FAST and evaluate it against real-world Node.js packages. We show that FAST is able to find 242 zero-day vulnerabilities in NPM with 21 CVE identifiers being assigned. Our evaluation also shows that FAST can scale to real-world applications such as NodeBB and popular frameworks such as total.js and strapi in finding legacy vulnerabilities that no prior works can.
►bibtex
DOI: 10.1109/SP46215.2023.10179352
@inproceedings{Kang:FAST:ieeesp23,
abstract = {Taint-style vulnerabilities, such as OS command injection and path traversal, are common and severe software weaknesses. There exists an inherent trade-off between analysis scalability and accuracy in detecting such vulnerabilities. On one hand, existing syntax-directed approaches often make compromises in the analysis accuracy on dynamic features like bracket syntax. On the other hand, existing abstract interpretation often faces the issue of state explosion in the abstract domain, thus leading to a scalability problem. In this paper, we present a novel approach, called FAST, to scale the vulnerability discovery of JavaScript packages via a novel abstract interpretation approach that relies on two new techniques, called bottom-up and top-down abstract interpretation. The former abstractly interprets functions based on scopes instead of call sequences to construct dynamic call edges. Then, the latter follows specific control-flow paths and prunes the program to skip statements unrelated to the sink. If an end-to-end data-flow path is found, FAST queries the satisfiability of constraints along the path and verifies the exploitability to reduce human efforts. We implement a prototype of FAST and evaluate it against real-world Node.js packages. We show that FAST is able to find 242 zero-day vulnerabilities in NPM with 21 CVE identifiers being assigned. Our evaluation also shows that FAST can scale to real-world applications such as NodeBB and popular frameworks such as total.js and strapi in finding legacy vulnerabilities that no prior works can.},
author = {Mingqing Kang and Yichao Xu and Song Li and Rigel Gjomemo and Jianwei Hu and V. N. Venkatakrishnan and Yinzhi Cao},
booktitle = {2023 IEEE Symposium on Security and Privacy (SP)},
date-modified = {2026-02-16 23:14:46 -0600},
doi = {10.1109/SP46215.2023.10179352},
issn = {2375-1207},
keywords = {Privacy; Prototypes; Node.js; Abstract-Interpretation; vulnerability analysis},
month = {May},
pages = {1059-1076},
series = {{IEEESP}'23},
title = {Scaling JavaScript Abstract Interpretation to Detect and Exploit Node.js Taint-style Vulnerability},
year = 2023,
bdsk-url-1 = {https://doi.org/10.1109/SP46215.2023.10179352},
}
|
|
[76]
|
System and method associated with expedient detection and reconstruction of cyber events in a compact scenario representation using provenance tags and customizable policy
(Sekar Ramasubramanian, Junao Wang, Md Nahid Hossain, Sadegh M. Milajerdi, Birhanu Eshete, Rigel Gjomemo, V. N. Venkatakrishnan, Scott D. Stoller)
U.S. Patent (US11601442B2)
Abstract
A system associated with detecting a cyber-attack and reconstructing events associated with a cyber-attack campaign, is disclosed. The system performs various operations that include receiving an audit data stream associated with cyber events. The system identifies trustworthiness values in a portion of data associated with the cyber events and assigns provenance tags to the portion of the data based on the identified trustworthiness values. An initial visual representation is generated based on the assigned provenance tags to the portion of the data. The initial visual representation is condensed based on a backward traversal of the initial visual representation in identifying a shortest path from a suspect node to an entry point node. A scenario visual representation is generated that specifies nodes most relevant to the cyber events associated with the cyber-attack based on the identified shortest path. A corresponding method and computer-readable medium are also disclosed.
►bibtex
PDF
@patent{Sekar:Patent23,
abstract = {A system associated with detecting a cyber-attack and reconstructing events associated with a cyber-attack campaign, is disclosed. The system performs various operations that include receiving an audit data stream associated with cyber events. The system identifies trustworthiness values in a portion of data associated with the cyber events and assigns provenance tags to the portion of the data based on the identified trustworthiness values. An initial visual representation is generated based on the assigned provenance tags to the portion of the data. The initial visual representation is condensed based on a backward traversal of the initial visual representation in identifying a shortest path from a suspect node to an entry point node. A scenario visual representation is generated that specifies nodes most relevant to the cyber events associated with the cyber-attack based on the identified shortest path. A corresponding method and computer-readable medium are also disclosed.},
author = {Sekar Ramasubramanian and Junao Wang and Md Nahid Hossain and Sadegh M. Milajerdi and Birhanu Eshete and Rigel Gjomemo and V. N. Venkatakrishnan and Scott D. Stoller},
booktitle = {U.S. Patent},
date-modified = {2026-02-17 09:16:03 -0600},
day = 7,
institution = {University of Illinois at Chicago},
keywords = {intrusion detection; attack reconstruction; provenance graphs},
month = {March},
number = {US11601442B2},
series = {US11601442B2},
title = {System and method associated with expedient detection and reconstruction of cyber events in a compact scenario representation using provenance tags and customizable policy},
type = {Patent},
url = {https://patents.google.com/patent/US11601442B2/en},
year = 2023,
bdsk-url-1 = {https://patents.google.com/patent/US11601442B2/en},
}
|
|
[71]
|
POIROT: Aligning Attack Behavior with Kernel Audit Records for Cyber Threat Hunting
(Sadegh M. Milajerdi, Birhanu Eshete, Rigel Gjomemo, V. N. Venkatakrishnan)
Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS'19), pp. 1813–1830
Abstract
Cyber threat intelligence (CTI) is being used to search for indicators of attacks that might have compromised an enterprise network for a long time without being discovered. To have a more effective analysis, CTI open standards have incorporated descriptive relationships showing how the indicators or observables are related to each other. However, these relationships are either completely overlooked in information gathering or not used for threat hunting. In this paper, we propose a system, called POIROT, which uses these correlations to uncover the steps of a successful attack campaign. We use kernel audits as a reliable source that covers all causal relations and information flows among system entities and model threat hunting as an inexact graph pattern matching problem. Our technical approach is based on a novel similarity metric which assesses an alignment between a query graph constructed out of CTI correlations and a provenance graph constructed out of kernel audit log records. We evaluate POIROT on publicly released real-world incident reports as well as reports of an adversarial engagement designed by DARPA, including ten distinct attack campaigns against different OS platforms such as Linux, FreeBSD, and Windows. Our evaluation results show that POIROT is capable of searching inside graphs containing millions of nodes and pinpoint the attacks in a few minutes, and the results serve to illustrate that CTI correlations could be used as robust and reliable artifacts for threat hunting.
►bibtex
PDF DOI: 10.1145/3319535.3363217
@inproceedings{Milajerdi:ccs19,
abstract = {Cyber threat intelligence (CTI) is being used to search for indicators of attacks that might have compromised an enterprise network for a long time without being discovered. To have a more effective analysis, CTI open standards have incorporated descriptive relationships showing how the indicators or observables are related to each other. However, these relationships are either completely overlooked in information gathering or not used for threat hunting. In this paper, we propose a system, called POIROT, which uses these correlations to uncover the steps of a successful attack campaign. We use kernel audits as a reliable source that covers all causal relations and information flows among system entities and model threat hunting as an inexact graph pattern matching problem. Our technical approach is based on a novel similarity metric which assesses an alignment between a query graph constructed out of CTI correlations and a provenance graph constructed out of kernel audit log records. We evaluate POIROT on publicly released real-world incident reports as well as reports of an adversarial engagement designed by DARPA, including ten distinct attack campaigns against different OS platforms such as Linux, FreeBSD, and Windows. Our evaluation results show that POIROT is capable of searching inside graphs containing millions of nodes and pinpoint the attacks in a few minutes, and the results serve to illustrate that CTI correlations could be used as robust and reliable artifacts for threat hunting.},
address = {London, UK},
author = {Sadegh M. Milajerdi and Birhanu Eshete and Rigel Gjomemo and V. N. Venkatakrishnan},
booktitle = {Proceedings of the 2019 {ACM} {SIGSAC} Conference on Computer and Communications Security},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 18:17:21 -0600},
doi = {10.1145/3319535.3363217},
keywords = {cyber threat-hunting; provenance graphs; audit logs; graph analytics; cyber threat intelligence},
month = {Nov},
pages = {1813--1830},
publisher = {{ACM}},
series = {{CCS}'19},
title = {{POIROT:} Aligning Attack Behavior with Kernel Audit Records for Cyber Threat Hunting},
url = {https://doi.org/10.1145/3319535.3363217},
year = 2019,
bdsk-url-1 = {https://dblp.org/rec/conf/ccs/MilajerdiEGV19},
bdsk-url-2 = {https://doi.org/10.1145/3319535.3363217},
}
|
|
[70]
|
HOLMES: Real-Time APT Detection through Correlation of Suspicious Information Flows
(Sadegh Momeni Milajerdi, Rigel Gjomemo, Birhanu Eshete, R. Sekar, V. N. Venkatakrishnan)
2019 IEEE Symposium on Security and Privacy, SP 2019, San Francisco, CA, USA, May 19-23, 2019 (IEEESP'19), pp. 1137–1152
Abstract
In this paper, we present HOLMES, a system that implements a new approach to the detection of Advanced and Persistent Threats (APTs). HOLMES is inspired by several case studies of real-world APTs that highlight some common goals of APT actors. In a nutshell, HOLMES aims to produce a detection signal that indicates the presence of a coordinated set of activities that are part of an APT campaign. One of the main challenges addressed by our approach involves developing a suite of techniques that make the detection signal robust and reliable. At a high-level, the techniques we develop effectively leverage the correlation between suspicious information flows that arise during an attacker campaign. In addition to its detection capability, HOLMES is also able to generate a high-level graph that summarizes the attacker's actions in real-time. This graph can be used by an analyst for an effective cyber response. An evaluation of our approach against some real-world APTs indicates that HOLMES can detect APT campaigns with high precision and low false alarm rate. The compact high-level graphs produced by HOLMES effectively summarizes an ongoing attack campaign and can assist real-time cyber-response operations.
►bibtex
PDF DOI: 10.1109/SP.2019.00026
@inproceedings{Milajerdi:ieeesp19,
abstract = {In this paper, we present HOLMES, a system that implements a new approach to the detection of Advanced and Persistent Threats (APTs). HOLMES is inspired by several case studies of real-world APTs that highlight some common goals of APT actors. In a nutshell, HOLMES aims to produce a detection signal that indicates the presence of a coordinated set of activities that are part of an APT campaign. One of the main challenges addressed by our approach involves developing a suite of techniques that make the detection signal robust and reliable. At a high-level, the techniques we develop effectively leverage the correlation between suspicious information flows that arise during an attacker campaign. In addition to its detection capability, HOLMES is also able to generate a high-level graph that summarizes the attacker's actions in real-time. This graph can be used by an analyst for an effective cyber response. An evaluation of our approach against some real-world APTs indicates that HOLMES can detect APT campaigns with high precision and low false alarm rate. The compact high-level graphs produced by HOLMES effectively summarizes an ongoing attack campaign and can assist real-time cyber-response operations.},
address = {San Francisco, CA, USA},
author = {Sadegh Momeni Milajerdi and Rigel Gjomemo and Birhanu Eshete and R. Sekar and V. N. Venkatakrishnan},
booktitle = {2019 {IEEE} Symposium on Security and Privacy, {SP} 2019, San Francisco, CA, USA, May 19-23, 2019},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:17:14 -0600},
doi = {10.1109/SP.2019.00026},
keywords = {alert correlation; intrusion detection; attack forensics; attack reconstruction; tactics techniques and procedures; advanced persistent threats},
month = {May},
pages = {1137--1152},
publisher = {{IEEE}},
series = {{IEEESP}'19},
title = {{HOLMES:} Real-Time {APT} Detection through Correlation of Suspicious Information Flows},
url = {https://doi.org/10.1109/SP.2019.00026},
year = 2019,
bdsk-url-1 = {https://dblp.org/rec/conf/sp/MilajerdiGESV19},
bdsk-url-2 = {https://doi.org/10.1109/SP.2019.00026},
}
|
|
[69]
|
ProPatrol: Attack Investigation via Extracted High-Level Tasks
(Sadegh M. Milajerdi, Birhanu Eshete, Rigel Gjomemo, V. N. Venkatakrishnan)
14th International Conference on Information Systems Security (Lecture Notes in Computer Science, ICISS'18), 11281, pp. 107–126
Abstract
Kernel audit logs are an invaluable source of information in the forensic investigation of a cyber-attack. However, the coarse granularity of dependency information in audit logs leads to the construction of huge attack graphs which contain false or inaccurate dependencies. To overcome this problem, we propose a system, called ProPatrol, which leverages the open compartmentalized design in families of enterprise applications used in security-sensitive contexts (e.g., browser, chat client, email client). To achieve its goal, ProPatrol infers a model for an application's high-level tasks as input-processing compartments using purely the audit log events generated by that application. The main benefit of this approach is that it does not rely on source code or binary instrumentation, but only on a preliminary and general knowledge of an application's architecture to bootstrap the analysis. Our experiments with enterprise-level attacks demonstrate that ProPatrol significantly cuts down the forensic investigation effort and quickly pinpoints the root-cause of attacks. ProPatrol incurs less than 2% runtime overhead on a commodity operating system.
►bibtex
PDF DOI: 10.1007/978-3-030-05171-6_6
@inproceedings{Milajerdi:iciss18,
abstract = {Kernel audit logs are an invaluable source of information in the forensic investigation of a cyber-attack. However, the coarse granularity of dependency information in audit logs leads to the construction of huge attack graphs which contain false or inaccurate dependencies. To overcome this problem, we propose a system, called ProPatrol, which leverages the open compartmentalized design in families of enterprise applications used in security-sensitive contexts (e.g., browser, chat client, email client). To achieve its goal, ProPatrol infers a model for an application's high-level tasks as input-processing compartments using purely the audit log events generated by that application. The main benefit of this approach is that it does not rely on source code or binary instrumentation, but only on a preliminary and general knowledge of an application's architecture to bootstrap the analysis. Our experiments with enterprise-level attacks demonstrate that ProPatrol significantly cuts down the forensic investigation effort and quickly pinpoints the root-cause of attacks. ProPatrol incurs less than 2% runtime overhead on a commodity operating system.},
address = {Bengaluru, India},
author = {Sadegh M. Milajerdi and Birhanu Eshete and Rigel Gjomemo and V. N. Venkatakrishnan},
booktitle = {14th International Conference on Information Systems Security},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 20:38:23 -0600},
doi = {10.1007/978-3-030-05171-6_6},
keywords = {attack reconstruction; provenance graphs; attack forensics; intrusion detection; alert correlation; advanced persistent threats},
month = {Dec},
pages = {107--126},
publisher = {Springer},
series = {Lecture Notes in Computer Science, {ICISS}'18},
title = {ProPatrol: Attack Investigation via Extracted High-Level Tasks},
url = {https://link.springer.com/chapter/10.1007/978-3-030-05171-6_6},
volume = 11281,
year = 2018,
bdsk-url-1 = {https://dblp.org/rec/conf/iciss/MilajerdiEGV18},
bdsk-url-2 = {https://doi.org/10.1007/978-3-030-05171-6_6},
}
|
|
[68]
|
NAVEX: Precise and Scalable Exploit Generation for Dynamic Web Applications
(Abeer Alhuzali, Rigel Gjomemo, Birhanu Eshete, V. N. Venkatakrishnan)
27th USENIX Security Symposium, USENIX Security 2018 (SEC'18), pp. 377–392 Distinguished Paper Award!!
Abstract
Modern multi-tier web applications are composed of several dynamic features, which make their vulnerability analysis challenging from a purely static analysis perspective. We describe an approach that overcomes the challenges posed by the dynamic nature of web applications. Our approach combines dynamic analysis that is guided by static analysis techniques in order to automatically identify vulnerabilities and build working exploits. Our approach is implemented and evaluated in NAVEX, a tool that can scale the process of automatic vulnerability analysis and exploit generation to large applications and to multiple classes of vulnerabilities. In our experiments, we were able to use NAVEX over a codebase of 3.2 million lines of PHP code, and construct 204 exploits in the code that was analyzed.
►bibtex
PDF
@inproceedings{Alhuzhali:usenixsec18,
abstract = {Modern multi-tier web applications are composed of several dynamic features, which make their vulnerability analysis challenging from a purely static analysis perspective. We describe an approach that overcomes the challenges posed by the dynamic nature of web applications. Our approach combines dynamic analysis that is guided by static analysis techniques in order to automatically identify vulnerabilities and build working exploits. Our approach is implemented and evaluated in NAVEX, a tool that can scale the process of automatic vulnerability analysis and exploit generation to large applications and to multiple classes of vulnerabilities. In our experiments, we were able to use NAVEX over a codebase of 3.2 million lines of PHP code, and construct 204 exploits in the code that was analyzed.},
address = {Baltimore, MD, USA},
author = {Abeer Alhuzali and Rigel Gjomemo and Birhanu Eshete and V. N. Venkatakrishnan},
booktitle = {27th {USENIX} Security Symposium, {USENIX} Security 2018},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 22:13:57 -0600},
keywords = {web application security; static analysis; exploit generation; PHP analysis; vulnerability analysis},
month = {Aug},
note = {Distinguished Paper Award!!},
pages = {377--392},
publisher = {{USENIX} Association},
series = {{SEC}'18},
title = {{NAVEX:} Precise and Scalable Exploit Generation for Dynamic Web Applications},
url = {https://www.usenix.org/conference/usenixsecurity18/presentation/alhuzali},
year = 2018,
bdsk-url-1 = {https://dblp.org/rec/conf/uss/AlhuzaliGEV18},
bdsk-url-2 = {https://www.usenix.org/conference/usenixsecurity18/presentation/alhuzali},
}
|
|
[67]
|
SLEUTH: Real-time Attack Scenario Reconstruction from COTS Audit Data
(Md. Nahid Hossain, Sadegh M. Milajerdi, Junao Wang, Birhanu Eshete, Rigel Gjomemo, R. Sekar, Scott D. Stoller, V. N. Venkatakrishnan)
26th USENIX Security Symposium, USENIX Security (SEC'17), pp. 487–504 85 out of 522 submissions, Acceptance rate 17%
Abstract
We present an approach and system for real-time reconstruction of attack scenarios on an enterprise host. To meet the scalability and real-time needs of the problem, we develop a platform-neutral, main-memory based, dependency graph abstraction of audit-log data. We then present efficient, tag-based techniques for attack detection and reconstruction, including source identification and impact analysis. We also develop methods to reveal the big picture of attacks by construction of compact, visual graphs of attack steps. Our system participated in a red team evaluation organized by DARPA and was able to successfully detect and reconstruct the details of the red team's attacks on hosts running Windows, FreeBSD and Linux.
►bibtex
PDF
@inproceedings{Hossain:usenixsec17,
abstract = {We present an approach and system for real-time reconstruction of attack scenarios on an enterprise host. To meet the scalability and real-time needs of the problem, we develop a platform-neutral, main-memory based, dependency graph abstraction of audit-log data. We then present efficient, tag-based techniques for attack detection and reconstruction, including source identification and impact analysis. We also develop methods to reveal the big picture of attacks by construction of compact, visual graphs of attack steps. Our system participated in a red team evaluation organized by DARPA and was able to successfully detect and reconstruct the details of the red team's attacks on hosts running Windows, FreeBSD and Linux.},
address = {Vancouver, BC, Canada},
annote = {85 out of 522 submissions, Acceptance rate 17%},
author = {Md. Nahid Hossain and Sadegh M. Milajerdi and Junao Wang and Birhanu Eshete and Rigel Gjomemo and R. Sekar and Scott D. Stoller and V. N. Venkatakrishnan},
booktitle = {26th {USENIX} Security Symposium, {USENIX} Security},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 22:58:12 -0600},
keywords = {attack reconstruction; provenance graphs; attack forensics; intrusion detection; alert correlation; advanced persistent threats},
month = {Aug},
pages = {487--504},
publisher = {{USENIX} Association},
series = {{SEC}'17},
title = {{SLEUTH:} Real-time Attack Scenario Reconstruction from {COTS} Audit Data},
url = {https://www.usenix.org/conference/usenixsecurity17/technical-sessions/presentation/hossain},
year = 2017,
bdsk-url-1 = {https://dblp.org/rec/conf/uss/HossainMWEGSSV17},
bdsk-url-2 = {https://www.usenix.org/conference/usenixsecurity17/technical-sessions/presentation/hossain},
}
|
|
[66]
|
DynaMiner: Leveraging Offline Infection Analytics for On-the-Wire Malware Detection
(Birhanu Eshete, V. N. Venkatakrishnan)
47th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN'17), pp. 463–474 49 out of 220 submissions, Acceptance rate 22%
Abstract
Web-borne malware continues to be a major threat on the Web. At the core of malware infection are for-crime toolkits that exploit vulnerabilities in browsers and their extensions. When a victim host gets infected, the infection dynamics is often buried in benign traffic, which makes the task of inferring malicious behavior a non-trivial exercise. In this paper, we leverage web conversation graph analytics to tap into the rich dynamics of the interaction between a victim and malicious host(s) without the need for analyzing exploit payload. Based on insights derived from infection graph analytics, we formulate the malware detection challenge as a graph-analytics based learning problem. The key insight of our approach is the payload-agnostic abstraction and comprehensive analytics of malware infection dynamics pre-, during-, and post-infection. Our technique leverages 3 years of infection intelligence spanning 9 popular exploit kit families. Our approach is implemented in a tool called DynaMiner and evaluated on infection and benign HTTP traffic. DynaMiner achieves a 97.3% true positive rate with false positive rate of 1.5%. Our forensic and live case studies suggest the effectiveness of comprehensive graph abstraction malware infection. In some instances, DynaMiner detected unknown malware 11 days earlier than existing AV engines.
►bibtex
PDF DOI: 10.1109/DSN.2017.54
@inproceedings{Eshete:dsn17,
abstract = {Web-borne malware continues to be a major threat on the Web. At the core of malware infection are for-crime toolkits that exploit vulnerabilities in browsers and their extensions. When a victim host gets infected, the infection dynamics is often buried in benign traffic, which makes the task of inferring malicious behavior a non-trivial exercise. In this paper, we leverage web conversation graph analytics to tap into the rich dynamics of the interaction between a victim and malicious host(s) without the need for analyzing exploit payload. Based on insights derived from infection graph analytics, we formulate the malware detection challenge as a graph-analytics based learning problem. The key insight of our approach is the payload-agnostic abstraction and comprehensive analytics of malware infection dynamics pre-, during-, and post-infection. Our technique leverages 3 years of infection intelligence spanning 9 popular exploit kit families. Our approach is implemented in a tool called DynaMiner and evaluated on infection and benign HTTP traffic. DynaMiner achieves a 97.3% true positive rate with false positive rate of 1.5%. Our forensic and live case studies suggest the effectiveness of comprehensive graph abstraction malware infection. In some instances, DynaMiner detected unknown malware 11 days earlier than existing AV engines.},
address = {Denver, CO, USA},
annote = {49 out of 220 submissions, Acceptance rate 22%},
author = {Birhanu Eshete and V. N. Venkatakrishnan},
bibsource = {dblp computer science bibliography, https://dblp.org},
booktitle = {47th Annual {IEEE/IFIP} International Conference on Dependable Systems and Networks},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 08:35:08 -0600},
doi = {10.1109/DSN.2017.54},
keywords = {malware; intrusion detection; web security},
month = {June},
pages = {463--474},
publisher = {{IEEE} Computer Society},
series = {DSN'17},
title = {DynaMiner: Leveraging Offline Infection Analytics for On-the-Wire Malware Detection},
url = {https://doi.org/10.1109/DSN.2017.54},
year = 2017,
bdsk-url-1 = {https://dblp.org/rec/conf/dsn/EsheteV17},
bdsk-url-2 = {https://doi.org/10.1109/DSN.2017.54},
}
|
|
[65]
|
Chainsaw: Chained Automated Workflow-based Exploit Generation
(Abeer Alhuzali, Birhanu Eshete, Rigel Gjomemo, V. N. Venkatakrishnan)
Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS'16), pp. 641–652 Acceptance Rate: 16%
Abstract
We tackle the problem of automated exploit generation for web applications. In this regard, we present an approach that significantly improves the state-of-art in web injection vulnerability identification and exploit generation. Our approach for exploit generation tackles various challenges associated with typical web application characteristics: their multi-module nature, interposed user input, and multi-tier architectures using a database backend. Our approach develops precise models of application workflows, database schemas, and native functions to achieve high quality exploit generation. We implemented our approach in a tool called Chainsaw. Chainsaw was used to analyze 9 open source applications and generated over 199 first- and second-order injection exploits combined, significantly outperforming several related approaches.
►bibtex
PDF DOI: 10.1145/2976749.2978380
@inproceedings{Alhuzhali:ccs16,
abstract = {We tackle the problem of automated exploit generation for web applications. In this regard, we present an approach that significantly improves the state of the art in web injection vulnerability identification and exploit generation. Our approach for exploit generation tackles various challenges associated with typical web application characteristics: their multi-module nature, interposed user input, and multi-tier architectures using a database backend. Our approach develops precise models of application workflows, database schemas, and native functions to achieve high quality exploit generation. We implemented our approach in a tool called Chainsaw. Chainsaw was used to analyze 9 open source applications and generated over 199 first- and second-order injection exploits combined, significantly outperforming several related approaches.},
address = {Vienna, Austria},
annote = {Acceptance Rate: 16%},
author = {Abeer Alhuzali and Birhanu Eshete and Rigel Gjomemo and V. N. Venkatakrishnan},
booktitle = {Proceedings of the 2016 {ACM} {SIGSAC} Conference on Computer and Communications Security},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 22:58:43 -0600},
doi = {10.1145/2976749.2978380},
keywords = {web application security;vulnerability analysis;exploit generation; static analysis},
month = {Oct},
pages = {641--652},
publisher = {{ACM}},
series = {{CCS}'16},
title = {Chainsaw: Chained Automated Workflow-based Exploit Generation},
url = {https://doi.org/10.1145/2976749.2978380},
year = 2016,
bdsk-url-1 = {https://dblp.org/rec/conf/ccs/AlhuzaliEGV16},
bdsk-url-2 = {https://doi.org/10.1145/2976749.2978380},
}
|
|
[64]
|
Leveraging Static Analysis Tools for Improving Usability of Memory Error Sanitization Compilers
(Rigel Gjomemo, Phu H. Phung, Edmund Ballou, Kedar S. Namjoshi, V. N. Venkatakrishnan, Lenore D. Zuck)
2016 IEEE International Conference on Software Quality, Reliability and Security (QRS'16), pp. 323–334. Acceptance Rate: 29%. Best Paper Award!
Abstract
Memory errors such as buffer overruns are notorious security vulnerabilities. There has been considerable interest in having a compiler to ensure the safety of compiled code either through static verification or through instrumented runtime checks. While certifying compilation has shown much promise, it has not been practical, leaving code instrumentation as the next best strategy for compilation. We term such compilers Memory Error Sanitization Compilers (MESCs). MESCs are available as part of GCC, LLVM and MSVC suites. Due to practical limitations, MESCs typically apply instrumentation indiscriminately to every memory access, and are consequently prohibitively expensive and practical only for small code bases. This work proposes a methodology that applies state-of-the-art static analysis techniques to eliminate unnecessary runtime checks, resulting in more efficient and scalable defenses. The methodology was implemented on LLVM's Safecode, Integer Overflow, and Address Sanitizer passes, using static analysis of Frama-C and Codesurfer. The benchmarks demonstrate an improvement in runtime performance that makes incorporation of runtime checks a viable option for defenses.
►bibtex
PDF DOI: 10.1109/QRS.2016.44
@inproceedings{Gjomemo:qrs16,
abstract = {Memory errors such as buffer overruns are notorious security vulnerabilities. There has been considerable interest in having a compiler to ensure the safety of compiled code either through static verification or through instrumented runtime checks. While certifying compilation has shown much promise, it has not been practical, leaving code instrumentation as the next best strategy for compilation. We term such compilers Memory Error Sanitization Compilers (MESCs). MESCs are available as part of GCC, LLVM and MSVC suites. Due to practical limitations, MESCs typically apply instrumentation indiscriminately to every memory access, and are consequently prohibitively expensive and practical only for small code bases. This work proposes a methodology that applies state-of-the-art static analysis techniques to eliminate unnecessary runtime checks, resulting in more efficient and scalable defenses. The methodology was implemented on LLVM's Safecode, Integer Overflow, and Address Sanitizer passes, using static analysis of Frama-C and Codesurfer. The benchmarks demonstrate an improvement in runtime performance that makes incorporation of runtime checks a viable option for defenses.},
address = {Vienna, Austria},
annote = {Acceptance Rate 29%},
author = {Rigel Gjomemo and Phu H. Phung and Edmund Ballou and Kedar S. Namjoshi and V. N. Venkatakrishnan and Lenore D. Zuck},
booktitle = {2016 {IEEE} International Conference on Software Quality, Reliability and Security},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 22:07:17 -0600},
doi = {10.1109/QRS.2016.44},
keywords = {static analysis; Program analysis; memory safety; compilers; C language; LLVM;},
month = {Aug},
note = {Best Paper Award!},
pages = {323--334},
publisher = {{IEEE}},
series = {{QRS}'16},
title = {Leveraging Static Analysis Tools for Improving Usability of Memory Error Sanitization Compilers},
url = {https://doi.org/10.1109/QRS.2016.44},
year = 2016,
bdsk-url-1 = {https://dblp.org/rec/conf/qrs/GjomemoPBNVZ16},
bdsk-url-2 = {https://doi.org/10.1109/QRS.2016.44},
}
|
|
[63]
|
Patching Logic Vulnerabilities for Web Applications using LogicPatcher
(Maliheh Monshizadeh, Prasad Naldurg, V. N. Venkatakrishnan)
Proceedings of the Sixth ACM on Conference on Data and Application Security and Privacy, CODASPY 2016, New Orleans, LA, USA, March 9-11, 2016 (CODASPY'16), pp. 73–84
Abstract
Logic vulnerabilities are an important class of programming flaws in web applications. These vulnerabilities occur when a desired property pertaining to an application's logic does not hold along certain paths in the application's code. Many analysis tools have been developed to find logic vulnerabilities in web applications. Given a web application with logic vulnerabilities, the question is whether one can design methods to patch application code and prevent these vulnerabilities from being exploited. We answer this question by developing an approach and tool called LogicPatcher for patching of logic vulnerabilities. We focus on correct patch placement, i.e. identifying the precise location in code where the patch code can be introduced, based on path profiling. As we show in this paper, finding the appropriate location as well as generating the right patch can get complicated and require deep code analysis. We demonstrate the utility of LogicPatcher by automatically fixing several critical parameter tampering and authorization vulnerabilities in large web applications.
►bibtex
PDF DOI: 10.1145/2857705.2857727
@inproceedings{Monshizadeh:codaspy16,
abstract = {Logic vulnerabilities are an important class of programming flaws in web applications. These vulnerabilities occur when a desired property pertaining to an application's logic does not hold along certain paths in the application's code. Many analysis tools have been developed to find logic vulnerabilities in web applications. Given a web application with logic vulnerabilities, the question is whether one can design methods to patch application code and prevent these vulnerabilities from being exploited. We answer this question by developing an approach and tool called LogicPatcher for patching of logic vulnerabilities. We focus on correct patch placement, i.e. identifying the precise location in code where the patch code can be introduced, based on path profiling. As we show in this paper, finding the appropriate location as well as generating the right patch can get complicated and require deep code analysis. We demonstrate the utility of LogicPatcher by automatically fixing several critical parameter tampering and authorization vulnerabilities in large web applications.},
address = {New Orleans, Louisiana, USA},
author = {Maliheh Monshizadeh and Prasad Naldurg and V. N. Venkatakrishnan},
booktitle = {Proceedings of the Sixth {ACM} on Conference on Data and Application Security and Privacy, {CODASPY} 2016, New Orleans, LA, USA, March 9-11, 2016},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 22:59:16 -0600},
doi = {10.1145/2857705.2857727},
keywords = {web application security; vulnerability analysis; logic vulnerabilities; Program analysis; Code retrofitting; patching},
month = {Mar},
pages = {73--84},
publisher = {{ACM}},
series = {{CODASPY}'16},
title = {Patching Logic Vulnerabilities for Web Applications using LogicPatcher},
url = {https://doi.org/10.1145/2857705.2857727},
year = 2016,
bdsk-url-1 = {https://dblp.org/rec/conf/codaspy/MonshizadehNV16},
bdsk-url-2 = {https://doi.org/10.1145/2857705.2857727},
}
|
|
[62]
|
Between Worlds: Securing Mixed JavaScript/ActionScript Multi-Party Web Content
(Phu H. Phung, Maliheh Monshizadeh, Meera Sridhar, Kevin W. Hamlen, V. N. Venkatakrishnan)
IEEE Transactions on Dependable and Secure Computing (TDSC'15), 12(4), pp. 443–457
Abstract
Mixed Flash and JavaScript content has become increasingly prevalent; its purveyance of dynamic features unique to each platform has popularized it for myriad Web development projects. Although Flash and JavaScript security has been examined extensively, the security of untrusted content that combines both has received considerably less attention. This article considers this fusion in detail, outlining several practical scenarios that threaten the security of Web applications. The severity of these attacks warrants the development of new techniques that address the security of Flash-JavaScript content considered as a whole, in contrast to prior solutions that have examined Flash or JavaScript security individually. Toward this end, the article presents FlashJaX, a cross-platform solution that enforces fine-grained, history-based policies that span both Flash and JavaScript. Using in-lined reference monitoring, FlashJaX safely embeds untrusted JavaScript and Flash content in Web pages without modifying browser clients or using special plug-ins. The architecture of FlashJaX, its design and implementation, and a detailed security analysis are exposited. Experiments with advertisements from popular ad networks demonstrate that FlashJaX is transparent to policy-compliant advertisement content, yet blocks many common attack vectors that exploit the fusion of these Web platforms.
►bibtex
PDF DOI: 10.1109/TDSC.2014.2355847
@article{Phung:tdsc15,
abstract = {Mixed Flash and JavaScript content has become increasingly prevalent; its purveyance of dynamic features unique to each platform has popularized it for myriad Web development projects. Although Flash and JavaScript security has been examined extensively, the security of untrusted content that combines both has received considerably less attention. This article considers this fusion in detail, outlining several practical scenarios that threaten the security of Web applications. The severity of these attacks warrants the development of new techniques that address the security of Flash-JavaScript content considered as a whole, in contrast to prior solutions that have examined Flash or JavaScript security individually. Toward this end, the article presents FlashJaX, a cross-platform solution that enforces fine-grained, history-based policies that span both Flash and JavaScript. Using in-lined reference monitoring, FlashJaX safely embeds untrusted JavaScript and Flash content in Web pages without modifying browser clients or using special plug-ins. The architecture of FlashJaX, its design and implementation, and a detailed security analysis are exposited. Experiments with advertisements from popular ad networks demonstrate that FlashJaX is transparent to policy-compliant advertisement content, yet blocks many common attack vectors that exploit the fusion of these Web platforms.},
author = {Phu H. Phung and Maliheh Monshizadeh and Meera Sridhar and Kevin W. Hamlen and V. N. Venkatakrishnan},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:17:46 -0600},
doi = {10.1109/TDSC.2014.2355847},
journal = {{IEEE} Transactions on Dependable and Secure Computing},
keywords = {web application security; runtime monitoring; JavaScript; access control; Action Script; Flash; web advertising},
month = {Jul},
number = 4,
pages = {443--457},
series = {{TDSC}'15},
title = {Between Worlds: Securing Mixed JavaScript/ActionScript Multi-Party Web Content},
url = {https://doi.org/10.1109/TDSC.2014.2355847},
volume = 12,
year = 2015,
bdsk-url-1 = {https://dblp.org/rec/journals/tdsc/PhungMSHV15},
bdsk-url-2 = {https://doi.org/10.1109/TDSC.2014.2355847},
}
|
|
[61]
|
Static Detection and Automatic Exploitation of Intent Message Vulnerabilities in Android Applications
(Daniele Gallingani, Rigel Gjomemo, V. N. Venkatakrishnan, Stefano Zanero)
Proceedings of the 2015 IEEE Mobile Security Technologies (MoST) (MOST'15)
Abstract
Android's Inter-Component Communication (ICC) mechanism strongly relies on Intent messages. Unfortunately, due to the lack of message origin verification in Intents, implementing security policies based on message sources is hard in practice, and completely relies on the programmer's skill and attention. In this paper, we present a framework for automatically detecting Intent input validation vulnerabilities. We are thus able to highlight component fragments that expose vulnerable resources to possible malicious message senders. Most importantly, we advance the state of the art by developing a method to automatically demonstrate whether the identified vulnerabilities can be exploited or not, adopting a formal approach to automatically produce malicious payloads that can trigger dangerous behavior in vulnerable applications. We therefore eliminate the high rate of false positives common in previously applied methods. We test our methods on a representative sample of applications, and we find that 29 out of 64 tested applications are detected as potentially vulnerable, while 26 out of 29 can be automatically proven to be exploitable. Our experiments demonstrate the lack of exhaustive sanity checks when receiving messages from unknown sources, and stress the underestimation of this problem in real-world application development.
►bibtex
PDF
@inproceedings{Gallingani:most15,
abstract = {Android's Inter-Component Communication (ICC) mechanism strongly relies on Intent messages. Unfortunately, due to the lack of message origin verification in Intents, implementing security policies based on message sources is hard in practice, and completely relies on the programmer's skill and attention. In this paper, we present a framework for automatically detecting Intent input validation vulnerabilities. We are thus able to highlight component fragments that expose vulnerable resources to possible malicious message senders. Most importantly, we advance the state of the art by developing a method to automatically demonstrate whether the identified vulnerabilities can be exploited or not, adopting a formal approach to automatically produce malicious payloads that can trigger dangerous behavior in vulnerable applications. We therefore eliminate the high rate of false positives common in previously applied methods. We test our methods on a representative sample of applications, and we find that 29 out of 64 tested applications are detected as potentially vulnerable, while 26 out of 29 can be automatically proven to be exploitable. Our experiments demonstrate the lack of exhaustive sanity checks when receiving messages from unknown sources, and stress the underestimation of this problem in real-world application development.},
address = {San Jose, CA, USA},
author = {Daniele Gallingani and Rigel Gjomemo and V. N. Venkatakrishnan and Stefano Zanero},
booktitle = {Proceedings of the 2015 IEEE Mobile Security Technologies (MoST)},
date-added = {2026-02-15 18:01:58 -0600},
date-modified = {2026-02-16 23:18:01 -0600},
keywords = {mobile security;Android Security;vulnerability analysis;attacks},
month = {May},
organization = {IEEE},
series = {{MOST}'15},
title = {Static Detection and Automatic Exploitation of Intent Message Vulnerabilities in Android Applications},
url = {https://www.ieee-security.org/TC/SPW2015/MoST/papers/s3p1.pdf},
year = 2015,
bdsk-url-1 = {https://www.ieee-security.org/TC/SPW2015/MoST/papers/s3p1.pdf},
}
|
|
[60]
|
Vetting SSL Usage in Applications with SSLINT
(Boyuan He, Vaibhav Rastogi, Yinzhi Cao, Yan Chen, V. N. Venkatakrishnan, Runqing Yang, Zhenrui Zhang)
2015 IEEE Symposium on Security and Privacy, SP 2015, San Jose, CA, USA, May 17-21, 2015 (IEEESP'15), pp. 519–534. 55 papers accepted out of 407, 13.5%.
Abstract
Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols have become the security backbone of the Web and Internet today. Many systems including mobile and desktop applications are protected by SSL/TLS protocols against network attacks. However, many vulnerabilities caused by incorrect use of SSL/TLS APIs have been uncovered in recent years. Such vulnerabilities, many of which are caused by poor API design and inexperience of application developers, often lead to confidential data leakage or man-in-the-middle attacks. In this paper, to guarantee code quality and logic correctness of SSL/TLS applications, we design and implement SSLINT, a scalable, automated, static analysis system for detecting incorrect use of SSL/TLS APIs. SSLINT is capable of performing automatic logic verification with high efficiency and good accuracy. To demonstrate it, we apply SSLINT to one of the most popular Linux distributions -- Ubuntu. We find 27 previously unknown SSL/TLS vulnerabilities in Ubuntu applications, most of which are also distributed with other Linux distributions.
►bibtex
PDF DOI: 10.1109/SP.2015.38
@inproceedings{He:ieeesp15,
abstract = {Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols have become the security backbone of the Web and Internet today. Many systems including mobile and desktop applications are protected by SSL/TLS protocols against network attacks. However, many vulnerabilities caused by incorrect use of SSL/TLS APIs have been uncovered in recent years. Such vulnerabilities, many of which are caused by poor API design and inexperience of application developers, often lead to confidential data leakage or man-in-the-middle attacks. In this paper, to guarantee code quality and logic correctness of SSL/TLS applications, we design and implement SSLINT, a scalable, automated, static analysis system for detecting incorrect use of SSL/TLS APIs. SSLINT is capable of performing automatic logic verification with high efficiency and good accuracy. To demonstrate it, we apply SSLINT to one of the most popular Linux distributions -- Ubuntu. We find 27 previously unknown SSL/TLS vulnerabilities in Ubuntu applications, most of which are also distributed with other Linux distributions.},
address = {San Jose, CA, USA},
annote = {55 papers accepted out of 407, 13.5%},
author = {Boyuan He and Vaibhav Rastogi and Yinzhi Cao and Yan Chen and V. N. Venkatakrishnan and Runqing Yang and Zhenrui Zhang},
booktitle = {2015 {IEEE} Symposium on Security and Privacy, {SP} 2015, San Jose, CA, USA, May 17-21, 2015},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:02:43 -0600},
doi = {10.1109/SP.2015.38},
keywords = {web security;static analysis; C language; vulnerability analysis;},
month = {May},
pages = {519--534},
publisher = {{IEEE} Computer Society},
series = {{IEEESP}'15},
title = {Vetting {SSL} Usage in Applications with {SSLINT}},
url = {https://doi.org/10.1109/SP.2015.38},
year = 2015,
bdsk-url-1 = {https://dblp.org/rec/conf/sp/HeRCCVYZ15},
bdsk-url-2 = {https://doi.org/10.1109/SP.2015.38},
}
|
|
[59]
|
Practical Exploit Generation for Intent Message Vulnerabilities in Android (Refereed Poster)
(Daniele Gallingani, Rigel Gjomemo, V. N. Venkatakrishnan, Stefano Zanero)
Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, CODASPY 2015, San Antonio, TX, USA, March 2-4, 2015 (CODASPY'15), pp. 155–157
Abstract
Android's Inter-Component Communication (ICC) mechanism strongly relies on Intent messages. Unfortunately, due to the lack of message origin verification in Intents, application security completely relies on the programmer's skill and attention. In this paper, we advance the state of the art by developing a method to automatically detect potential vulnerabilities and, most importantly, demonstrate whether they can be exploited or not. To this end, we adopt a formal approach to automatically produce malicious payloads that can trigger dangerous behavior in vulnerable applications. We test our methods on a representative sample of applications, and we find that 29 out of 64 tested applications are potentially vulnerable, while 26 of them are automatically proven to be exploitable.
►bibtex
PDF DOI: 10.1145/2699026.2699132
@inproceedings{Gallingani:codaspy15,
abstract = {Android's Inter-Component Communication (ICC) mechanism strongly relies on Intent messages. Unfortunately, due to the lack of message origin verification in Intents, application security completely relies on the programmer's skill and attention. In this paper, we advance the state of the art by developing a method to automatically detect potential vulnerabilities and, most importantly, demonstrate whether they can be exploited or not. To this end, we adopt a formal approach to automatically produce malicious payloads that can trigger dangerous behavior in vulnerable applications. We test our methods on a representative sample of applications, and we find that 29 out of 64 tested applications are potentially vulnerable, while 26 of them are automatically proven to be exploitable.},
author = {Daniele Gallingani and Rigel Gjomemo and V. N. Venkatakrishnan and Stefano Zanero},
booktitle = {Proceedings of the 5th {ACM} Conference on Data and Application Security and Privacy, {CODASPY} 2015, San Antonio, TX, USA, March 2-4, 2015},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:00:47 -0600},
doi = {10.1145/2699026.2699132},
keywords = {Android Security; mobile security; vulnerability analysis; attacks},
month = {Mar},
pages = {155--157},
publisher = {{ACM}},
series = {{CODASPY}'15},
title = {Practical Exploit Generation for Intent Message Vulnerabilities in Android (Refereed Poster)},
url = {https://doi.org/10.1145/2699026.2699132},
year = 2015,
bdsk-url-1 = {https://dblp.org/rec/conf/codaspy/GallinganiGVZ15},
bdsk-url-2 = {https://doi.org/10.1145/2699026.2699132},
}
|
|
[58]
|
EKHunter: A Counter-Offensive Toolkit for Exploit Kit Infiltration
(Birhanu Eshete, Abeer Alhuzali, Maliheh Monshizadeh, Phillip A. Porras, V. N. Venkatakrishnan, Vinod Yegneswaran)
22nd Annual Network and Distributed System Security Symposium (NDSS'15). 50 publications accepted out of 313, 15.9%.
Abstract
The emergence of exploit kits is one of the most important developments in modern cybercrime. Much of cybersecurity research in recent years has been devoted to defending citizens from harm delivered through exploit kits. In this paper, we examine an alternate, counter-offensive strategy towards combating cybercrime launched through exploit kits. Towards this goal, we survey a wide range of 30 real-world exploit kits and analyze a counter-offensive adversarial model against the kits and kit operator. Guided by our analysis, we present a systematic methodology for examining a given kit to determine where vulnerabilities may reside within its server side implementation. In our experiments, we found over 180 vulnerabilities among 16 exploit kits of those surveyed, and were able to automatically synthesize exploits for infiltrating 6 of them. The results validate our hypothesis that exploit kits largely lack the sophistication necessary to resist counter-offensive activities. We then propose the design of EKHUNTER, a system that is capable of automatically detecting the presence of exploit vulnerabilities and deriving laboratory test cases that can compromise both the integrity of a fielded exploit kit, and even the identity of the kit operator.
►bibtex
PDF
@inproceedings{Eshete:ndss15,
abstract = {The emergence of exploit kits is one of the most important developments in modern cybercrime. Much of cybersecurity research in recent years has been devoted to defending citizens from harm delivered through exploit kits. In this paper, we examine an alternate, counter-offensive strategy towards combating cybercrime launched through exploit kits. Towards this goal, we survey a wide range of 30 real-world exploit kits and analyze a counter-offensive adversarial model against the kits and kit operator. Guided by our analysis, we present a systematic methodology for examining a given kit to determine where vulnerabilities may reside within its server side implementation. In our experiments, we found over 180 vulnerabilities among 16 exploit kits of those surveyed, and were able to automatically synthesize exploits for infiltrating 6 of them. The results validate our hypothesis that exploit kits largely lack the sophistication necessary to resist counter-offensive activities. We then propose the design of EKHUNTER, a system that is capable of automatically detecting the presence of exploit vulnerabilities and deriving laboratory test cases that can compromise both the integrity of a fielded exploit kit, and even the identity of the kit operator.},
address = {San Diego, California, USA},
annote = {50 publications accepted out of 313, 15.9%},
author = {Birhanu Eshete and Abeer Alhuzali and Maliheh Monshizadeh and Phillip A. Porras and V. N. Venkatakrishnan and Vinod Yegneswaran},
booktitle = {22nd Annual Network and Distributed System Security Symposium},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:00:59 -0600},
keywords = {exploit kits; malware; web security; attacks; cybercrime},
month = {Feb},
publisher = {The Internet Society},
series = {{NDSS}'15},
title = {EKHunter: {A} Counter-Offensive Toolkit for Exploit Kit Infiltration},
url = {https://www.ndss-symposium.org/ndss2015/ekhunter-counter-offensive-toolkit-exploit-kit-infiltration},
year = 2015,
bdsk-url-1 = {https://dblp.org/rec/conf/ndss/EsheteAMPVY15},
bdsk-url-2 = {https://www.ndss-symposium.org/ndss2015/ekhunter-counter-offensive-toolkit-exploit-kit-infiltration},
}
|
|
[57]
|
From Verification to Optimizations
(Rigel Gjomemo, Kedar S. Namjoshi, Phu H. Phung, V. N. Venkatakrishnan, Lenore D. Zuck)
Proceedings of the 16th International Conference on Verification, Model Checking, and Abstract Interpretation (Lecture Notes in Computer Science, VMCAI'15), 8931, pp. 300–317
Abstract
Compilers perform a static analysis of a program prior to optimization. The precision of this analysis is limited, however, by strict time budgets for compilation. We explore an alternative, new approach, which links external sound static analysis tools into compilers. One of the key problems to be solved is that of propagating the source-level information gathered by a static analyzer deeper into the optimization pipeline. We propose a method to achieve this, and demonstrate its feasibility through an implementation using the LLVM compiler infrastructure. We show how assertions obtained from the Frama-C source code analysis platform are propagated through LLVM and are then used to substantially improve the effectiveness of several optimizations.
►bibtex
PDF DOI: 10.1007/978-3-662-46081-8_17
@inproceedings{Gjomemo:vmcai15,
abstract = {Compilers perform a static analysis of a program prior to optimization. The precision of this analysis is limited, however, by strict time budgets for compilation. We explore an alternative, new approach, which links external sound static analysis tools into compilers. One of the key problems to be solved is that of propagating the source-level information gathered by a static analyzer deeper into the optimization pipeline. We propose a method to achieve this, and demonstrate its feasibility through an implementation using the LLVM compiler infrastructure. We show how assertions obtained from the Frama-C source code analysis platform are propagated through LLVM and are then used to substantially improve the effectiveness of several optimizations.},
address = {Mumbai, India},
author = {Rigel Gjomemo and Kedar S. Namjoshi and Phu H. Phung and V. N. Venkatakrishnan and Lenore D. Zuck},
booktitle = {Proceedings of the 16th International Conference on Verification, Model Checking, and Abstract Interpretation},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 20:41:28 -0600},
doi = {10.1007/978-3-662-46081-8_17},
keywords = {compilers; verification; formal methods; Program analysis},
month = {Jan},
pages = {300--317},
publisher = {Springer},
series = {Lecture Notes in Computer Science, VMCAI'15},
title = {From Verification to Optimizations},
url = {https://doi.org/10.1007/978-3-662-46081-8_17},
volume = 8931,
year = 2015,
bdsk-url-1 = {https://dblp.org/rec/conf/vmcai/GjomemoNPVZ15},
bdsk-url-2 = {https://doi.org/10.1007/978-3-662-46081-8_17},
}
|
|
[56]
|
MACE: Detecting Privilege Escalation Vulnerabilities in Web Applications
(Maliheh Monshizadeh, Prasad Naldurg, V. N. Venkatakrishnan)
Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, Scottsdale, AZ, USA, November 3-7, 2014 (CCS'14), pp. 690–701
Abstract
►bibtex
PDF DOI: 10.1145/2660267.2660337
@inproceedings{Monshizadeh:ccs14,
address = {Scottsdale, AZ, USA},
author = {Maliheh Monshizadeh and Prasad Naldurg and V. N. Venkatakrishnan},
booktitle = {Proceedings of the 2014 {ACM} {SIGSAC} Conference on Computer and Communications Security, Scottsdale, AZ, USA, November 3-7, 2014},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:19:35 -0600},
doi = {10.1145/2660267.2660337},
editor = {Gail{-}Joon Ahn and Moti Yung and Ninghui Li},
keywords = {web application security; access control; vulnerability analysis},
month = {Nov},
pages = {690--701},
publisher = {{ACM}},
series = {{CCS}'14},
title = {{MACE:} Detecting Privilege Escalation Vulnerabilities in Web Applications},
url = {https://doi.org/10.1145/2660267.2660337},
year = 2014,
bdsk-url-1 = {https://dblp.org/rec/conf/ccs/MonshizadehNV14},
bdsk-url-2 = {https://doi.org/10.1145/2660267.2660337},
}
|
|
[55]
|
A Threat Table Based Assessment of Information Security in Telemedicine
(John C. Pendergrass, Karen Heart, C. Ranganathan, V. N. Venkatakrishnan)
International Journal of Healthcare Information Systems and Informatics (IJHISI), 9(4), pp. 20–31
Abstract
Information security within healthcare is paramount and telemedicine applications present unique security challenges. Technology is giving rise to new and advanced telemedicine applications and understanding the security threats to these applications is needed to ensure, among other things, the privacy of patient information. This paper presents a high level analysis of a telemedicine application in order to better understand the security threats to this unique and vulnerable environment. This risk analysis is performed using the concept of threat tables. This case study focuses on the capture and representation of salient security threats in telemedicine. To analyze the security threats to an application, we present a threat modeling framework utilizing a table driven approach. Our analysis reveals that even in a highly controlled environment with static locations, the security risks posed by telemedicine applications are significant, and that using a threat table approach provides an easy-to-use and effective method for managing these threats.
►bibtex
PDF DOI: 10.4018/IJHISI.2014100102
@article{Pendergrass:ijhisi14,
abstract = {Information security within healthcare is paramount and telemedicine applications present unique security challenges. Technology is giving rise to new and advanced telemedicine applications and understanding the security threats to these applications is needed to ensure, among other things, the privacy of patient information. This paper presents a high level analysis of a telemedicine application in order to better understand the security threats to this unique and vulnerable environment. This risk analysis is performed using the concept of threat tables. This case study focuses on the capture and representation of salient security threats in telemedicine. To analyze the security threats to an application, we present a threat modeling framework utilizing a table driven approach. Our analysis reveals that even in a highly controlled environment with static locations, the security risks posed by telemedicine applications are significant, and that using a threat table approach provides an easy-to-use and effective method for managing these threats.},
author = {John C. Pendergrass and Karen Heart and C. Ranganathan and V. N. Venkatakrishnan},
date-added = {2026-02-15 21:07:05 -0600},
date-modified = {2026-02-16 23:20:09 -0600},
doi = {10.4018/IJHISI.2014100102},
journal = {International Journal of Healthcare Information Systems and Informatics},
keywords = {telemedicine; health; Security; Privacy},
month = {Oct},
number = 4,
pages = {20--31},
series = {{IJHISI}},
title = {A Threat Table Based Assessment of Information Security in Telemedicine},
url = {https://doi.org/10.4018/ijhisi.2014100102},
volume = 9,
year = 2014,
bdsk-url-1 = {https://dblp.org/rec/journals/ijhisi/PendergrassHRV14},
bdsk-url-2 = {https://doi.org/10.4018/ijhisi.2014100102},
}
|
|
[54]
|
PeerShark: flow-clustering and conversation-generation for malicious peer-to-peer traffic identification
(Pratik Narang, Chittaranjan Hota, V. N. Venkatakrishnan)
EURASIP Journal on Information Security (JIS), 2014, pp. 15
Abstract
The distributed and decentralized nature of peer-to-peer (P2P) networks has offered a lucrative alternative to bot-masters to build botnets. P2P botnets are not prone to any single point of failure and have been proven to be highly resilient against takedown attempts. Moreover, smarter bots are stealthy in their communication patterns and elude the standard discovery techniques which look for anomalous network or communication behavior. In this paper, we present a methodology to detect P2P botnet traffic and differentiate it from benign P2P traffic in a network. Our approach neither assumes the availability of any `seed' information of bots nor relies on deep packet inspection. It aims to detect the stealthy behavior of P2P botnets. That is, we aim to detect P2P botnets when they lie dormant (to evade detection by intrusion detection systems) or while they perform malicious activities (spamming, password stealing, etc.) in a manner which is not observable to a network administrator. Our approach PeerShark combines the benefits of flow-based and conversation-based approaches with a two-tier architecture, and addresses the limitations of these approaches. By extracting statistical features from the network traces of P2P applications and botnets, we build supervised machine learning models which can accurately differentiate between benign P2P applications and P2P botnets. PeerShark could also detect unknown P2P botnet traffic with high accuracy.
►bibtex
PDF DOI: 10.1186/S13635-014-0015-3
@article{Narang:jis14,
abstract = {The distributed and decentralized nature of peer-to-peer (P2P) networks has offered a lucrative alternative to bot-masters to build botnets. P2P botnets are not prone to any single point of failure and have been proven to be highly resilient against takedown attempts. Moreover, smarter bots are stealthy in their communication patterns and elude the standard discovery techniques which look for anomalous network or communication behavior. In this paper, we present a methodology to detect P2P botnet traffic and differentiate it from benign P2P traffic in a network. Our approach neither assumes the availability of any `seed' information of bots nor relies on deep packet inspection. It aims to detect the stealthy behavior of P2P botnets. That is, we aim to detect P2P botnets when they lie dormant (to evade detection by intrusion detection systems) or while they perform malicious activities (spamming, password stealing, etc.) in a manner which is not observable to a network administrator. Our approach PeerShark combines the benefits of flow-based and conversation-based approaches with a two-tier architecture, and addresses the limitations of these approaches. By extracting statistical features from the network traces of P2P applications and botnets, we build supervised machine learning models which can accurately differentiate between benign P2P applications and P2P botnets. PeerShark could also detect unknown P2P botnet traffic with high accuracy.},
author = {Pratik Narang and Chittaranjan Hota and V. N. Venkatakrishnan},
date-added = {2026-02-15 13:12:39 -0600},
date-modified = {2026-02-16 23:20:32 -0600},
doi = {10.1186/S13635-014-0015-3},
journal = {{EURASIP} Journal on Information Security},
keywords = {malware;botnets;cybercrime},
month = {Oct},
number = 15,
pages = 15,
series = {{JIS}},
title = {PeerShark: flow-clustering and conversation-generation for malicious peer-to-peer traffic identification},
url = {https://doi.org/10.1186/s13635-014-0015-3},
volume = 2014,
year = 2014,
bdsk-url-1 = {https://dblp.org/rec/journals/ejisec/NarangHV14},
bdsk-url-2 = {https://doi.org/10.1186/s13635-014-0015-3},
}
|
|
[53]
|
DEICS: Data Erasure in Concurrent Software
(Kalpana Gondi, A. Prasad Sistla, V. N. Venkatakrishnan)
19th Nordic Conference on Secure IT Systems (Lecture Notes in Computer Science, NordSec'14), 8788, pp. 42–58
Abstract
A well-known tenet for preventing unauthorized leaks of sensitive data such as passwords and cryptographic keys is to erase ("zeroize") them after their intended use in any program. Prior work on minimizing sensitive data lifetimes has focused exclusively on sequential programs. In this work, we address the problem of data lifetime minimization for concurrent programs. We develop a new algorithm that precisely anticipates when to introduce these erasures, and develop an implementation of this algorithm in a tool called DEICS. Through an experimental evaluation, we show that DEICS is able to reduce lifetimes of shared sensitive data in several concurrent applications (over 100k lines of code combined) with minimal performance overheads.
►bibtex
PDF DOI: 10.1007/978-3-319-11599-3_3
@inproceedings{Gondi:nordsec14,
abstract = {A well-known tenet for preventing unauthorized leaks of sensitive data such as passwords and cryptographic keys is to erase (``zeroize'') them after their intended use in any program. Prior work on minimizing sensitive data lifetimes has focused exclusively on sequential programs. In this work, we address the problem of data lifetime minimization for concurrent programs. We develop a new algorithm that precisely anticipates when to introduce these erasures, and develop an implementation of this algorithm in a tool called DEICS. Through an experimental evaluation, we show that DEICS is able to reduce lifetimes of shared sensitive data in several concurrent applications (over 100k lines of code combined) with minimal performance overheads.},
address = {Troms{\o}, Norway},
author = {Kalpana Gondi and A. Prasad Sistla and V. N. Venkatakrishnan},
booktitle = {19th Nordic Conference on Secure {IT} Systems},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 20:42:15 -0600},
doi = {10.1007/978-3-319-11599-3_3},
keywords = {confidentiality; sensitive data leaks; verification;Program analysis;program transformation;Code retrofitting; concurrent applications},
month = {Oct},
pages = {42--58},
publisher = {Springer},
series = {Lecture Notes in Computer Science, NordSec'14},
title = {{DEICS:} Data Erasure in Concurrent Software},
url = {https://doi.org/10.1007/978-3-319-11599-3_3},
volume = 8788,
year = 2014,
bdsk-url-1 = {https://dblp.org/rec/conf/nordsec/GondiSV14},
bdsk-url-2 = {https://doi.org/10.1007/978-3-319-11599-3_3},
bdsk-url-3 = {https://doi.org/10.1007/978-3-319-11599-3%5C_3},
}
|
|
[52]
|
PeerShark: flow-clustering and conversation-generation for malicious peer-to-peer traffic identification
(Pratik Narang, Chittaranjan Hota, V. N. Venkatakrishnan)
International Workshop on Cyber Crime (IWCC'14), pp. 15
Abstract
The decentralized nature of Peer-to-Peer (P2P) botnets makes them difficult to detect. Their distributed nature also exhibits resilience against take-down attempts. Moreover, smarter bots are stealthy in their communication patterns, and elude the standard discovery techniques which look for anomalous network or communication behavior. In this paper, we propose PeerShark, a novel methodology to detect P2P botnet traffic and differentiate it from benign P2P traffic in a network. Instead of the traditional 5-tuple `flow-based' detection approach, we use a 2-tuple `conversation-based' approach which is port-oblivious, protocol-oblivious and does not require Deep Packet Inspection. PeerShark could also classify different P2P applications with an accuracy of more than 95%.
►bibtex
PDF
@inproceedings{Narang:IWCC14,
abstract = {The decentralized nature of Peer-to-Peer (P2P) botnets makes them difficult to detect. Their distributed nature also exhibits resilience against take-down attempts. Moreover, smarter bots are stealthy in their communication patterns, and elude the standard discovery techniques which look for anomalous network or communication behavior. In this paper, we propose PeerShark, a novel methodology to detect P2P botnet traffic and differentiate it from benign P2P traffic in a network. Instead of the traditional 5-tuple `flow-based' detection approach, we use a 2-tuple `conversation-based' approach which is port-oblivious, protocol-oblivious and does not require Deep Packet Inspection. PeerShark could also classify different P2P applications with an accuracy of more than 95%.},
address = {San Jose, CA, USA},
author = {Pratik Narang and Chittaranjan Hota and V. N. Venkatakrishnan},
booktitle = {International Workshop on Cyber Crime},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-15 17:22:58 -0600},
keywords = {malware;botnets;cybercrime},
month = {May},
organization = {{IEEE}},
pages = 15,
series = {IWCC'14},
title = {PeerShark: flow-clustering and conversation-generation for malicious peer-to-peer traffic identification},
url = {https://www.ieee-security.org/TC/SPW2014/papers/5103a108.PDF},
year = 2014,
bdsk-url-1 = {https://dblp.org/rec/journals/ejisec/NarangHV14},
bdsk-url-2 = {https://doi.org/10.1186/s13635-014-0015-3},
bdsk-url-3 = {https://www.ieee-security.org/TC/SPW2014/papers/5103a108.PDF},
}
|
|
[51]
|
Automated detection of parameter tampering opportunities and vulnerabilities in web applications
(Prithvi Bisht, Timothy L. Hinrichs, Nazari Skrupsky, V. N. Venkatakrishnan)
Journal of Computer Security (JCS), 22, pp. 415–465
Abstract
Parameter tampering attacks are dangerous to a web application whose server fails to replicate the validation of user-supplied data that is performed by the client in web forms. Malicious users who circumvent the client can capitalize on the missing server validation. In this paper, we provide a formal description of parameter tampering vulnerabilities and a high level approach for their detection. We specialize this high level approach to develop complementary detection solutions in two interesting settings: blackbox (only analyze client-side code in web forms) and whitebox (also analyze server-side code that processes submitted web forms). This paper presents interesting challenges encountered in realizing the high level approach for each setting and novel technical contributions that address these challenges. We also contrast utility, difficulties and effectiveness issues in both settings and provide a quantitative comparison of results. Our experiments with real world and open source applications demonstrate that parameter tampering vulnerabilities are prolific (total 47 in 9 applications), and their exploitation can have serious consequences including unauthorized transactions, account hijacking and financial losses. We conclude this paper with a discussion on countermeasures for parameter tampering attacks and present a detailed survey of existing defenses and their suitability.
►bibtex
PDF DOI: 10.3233/JCS-140498
@article{Bisht:JCS14,
abstract = {Parameter tampering attacks are dangerous to a web application whose server fails to replicate the validation of user-supplied data that is performed by the client in web forms. Malicious users who circumvent the client can capitalize on the missing server validation. In this paper, we provide a formal description of parameter tampering vulnerabilities and a high level approach for their detection. We specialize this high level approach to develop complementary detection solutions in two interesting settings: blackbox (only analyze client-side code in web forms) and whitebox (also analyze server-side code that processes submitted web forms). This paper presents interesting challenges encountered in realizing the high level approach for each setting and novel technical contributions that address these challenges. We also contrast utility, difficulties and effectiveness issues in both settings and provide a quantitative comparison of results. Our experiments with real world and open source applications demonstrate that parameter tampering vulnerabilities are prolific (total 47 in 9 applications), and their exploitation can have serious consequences including unauthorized transactions, account hijacking and financial losses. We conclude this paper with a discussion on countermeasures for parameter tampering attacks and present a detailed survey of existing defenses and their suitability.},
author = {Prithvi Bisht and Timothy L. Hinrichs and Nazari Skrupsky and V. N. Venkatakrishnan},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:20:48 -0600},
doi = {10.3233/JCS-140498},
journal = {Journal of Computer Security},
keywords = {parameter tampering; web application security; attacks; vulnerability analysis; symbolic evaluation},
month = {May},
number = 3,
pages = {415--465},
series = {{JCS}},
title = {Automated detection of parameter tampering opportunities and vulnerabilities in web applications},
url = {https://doi.org/10.3233/JCS-140498},
volume = 22,
year = 2014,
bdsk-url-1 = {https://dblp.org/rec/journals/jcs/BishtHSV14},
bdsk-url-2 = {https://doi.org/10.3233/JCS-140498},
}
|
|
[50]
|
Sensitivity of Information Disclosed in Amazon Reviews
(Federica Fornaciari, C. Ranganathan, V. N. Venkatakrishnan)
The Eighth International Conference on Digital Society (ICDS'14). Acceptance rate 28%
Abstract
As online product reviews become ubiquitous, more individuals increasingly write and rely on them. In an effort to share their experiences and opinions about a product, do individuals share private and sensitive information online? This study addresses this critical issue by examining the extent of sensitive information disclosed in Amazon.com's product reviews. We crawled Amazon.com and gathered all online reviews posted for six products that pertained to weight loss, anti-aging, sex-related, fragrance, baby care and electronic goods. This resulted in 3,485 reviews, which were text-analyzed and mined using Linguistic Inquiry and Word Count (LIWC) analysis. Then, data processed through LIWC were further analyzed through descriptive statistics and discriminant analysis. We found that Amazon's reviewers disclose high levels of sensitive information in the following categories: family, humans, positive emotions, negative emotions, sadness, cognitive mechanisms, concerns related to work, achievements, leisure and money. Sensitive disclosure is also found to be a function of the type of reviewer and of the anonymization strategies adopted.
►bibtex
PDF
@inproceedings{Fornaciari:icds14,
abstract = {As online product reviews become ubiquitous, more individuals increasingly write and rely on them. In an effort to share their experiences and opinions about a product, do individuals share private and sensitive information online? This study addresses this critical issue by examining the extent of sensitive information disclosed in Amazon.com's product reviews. We crawled Amazon.com and gathered all online reviews posted for six products that pertained to weight loss, anti-aging, sex-related, fragrance, baby care and electronic goods. This resulted in 3,485 reviews, which were text-analyzed and mined using Linguistic Inquiry and Word Count (LIWC) analysis. Then, data processed through LIWC were further analyzed through descriptive statistics and discriminant analysis. We found that Amazon's reviewers disclose high levels of sensitive information in the following categories: family, humans, positive emotions, negative emotions, sadness, cognitive mechanisms, concerns related to work, achievements, leisure and money. Sensitive disclosure is also found to be a function of the type of reviewer and of the anonymization strategies adopted.},
address = {Barcelona, Spain},
annote = {Acceptance rate 28%},
author = {Federica Fornaciari and C. Ranganathan and V. N. Venkatakrishnan},
booktitle = {The Eighth International Conference on Digital Society},
date-added = {2026-02-15 13:27:39 -0600},
date-modified = {2026-02-17 08:17:24 -0600},
keywords = {Privacy; information flow},
month = {Mar},
series = {ICDS'14},
title = {Sensitivity of Information Disclosed in Amazon Reviews},
url = {https://personales.upv.es/thinkmind/dl/conferences/icds/icds_2014/icds_2014_1_10_10073.pdf},
year = 2014,
}
|
|
[49]
|
WebWinnow: leveraging exploit kit workflows to detect malicious URLs
(Birhanu Eshete, V. N. Venkatakrishnan)
Fourth ACM Conference on Data and Application Security and Privacy (CODASPY'14), San Antonio, TX, USA, March 3–5, 2014, pp. 305–312. Acceptance rate: 19/119 (15.9%)
Abstract
Organized cybercrime on the Internet is proliferating due to exploit kits. Attacks launched through these kits include drive-by downloads, spam and denial-of-service. In this paper, we tackle the problem of detecting whether a given URL is hosted by an exploit kit. Through an extensive analysis of the workflows of about 40 different exploit kits, we develop an approach that uses machine learning to detect whether a given URL is hosting an exploit kit. Central to our approach is the design of distinguishing features that are drawn from the analysis of attack-centric and self-defense behaviors of exploit kits. This design is based on observations drawn from exploit kits that we installed in a laboratory setting as well as live exploit kits that were hosted on the Web. We discuss the design and implementation of a system called WEBWINNOW that is based on this approach. Extensive experiments with real world malicious URLs reveal that WEBWINNOW is highly effective in the detection of malicious URLs hosted by exploit kits with very low false-positives.
►bibtex
PDF DOI: 10.1145/2557547.2557575
@inproceedings{Eshete:codaspy14,
abstract = {Organized cybercrime on the Internet is proliferating due to exploit kits. Attacks launched through these kits include drive-by downloads, spam and denial-of-service. In this paper, we tackle the problem of detecting whether a given URL is hosted by an exploit kit. Through an extensive analysis of the workflows of about 40 different exploit kits, we develop an approach that uses machine learning to detect whether a given URL is hosting an exploit kit. Central to our approach is the design of distinguishing features that are drawn from the analysis of attack-centric and self-defense behaviors of exploit kits. This design is based on observations drawn from exploit kits that we installed in a laboratory setting as well as live exploit kits that were hosted on the Web. We discuss the design and implementation of a system called WEBWINNOW that is based on this approach. Extensive experiments with real world malicious URLs reveal that WEBWINNOW is highly effective in the detection of malicious URLs hosted by exploit kits with very low false-positives.},
address = {San Antonio, TX, USA},
annote = {Acceptance rate: 19/119, 15.9%},
author = {Birhanu Eshete and V. N. Venkatakrishnan},
booktitle = {Fourth {ACM} Conference on Data and Application Security and Privacy, CODASPY'14, San Antonio, TX, {USA} - March 03 - 05, 2014},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:03:11 -0600},
doi = {10.1145/2557547.2557575},
keywords = {exploit kits; web security; malware},
month = {Mar},
pages = {305--312},
publisher = {{ACM}},
series = {{CODASPY}'14},
title = {WebWinnow: leveraging exploit kit workflows to detect malicious {URLs}},
url = {https://doi.org/10.1145/2557547.2557575},
year = 2014,
bdsk-url-1 = {https://dblp.org/rec/conf/codaspy/EsheteV14},
bdsk-url-2 = {https://doi.org/10.1145/2557547.2557575},
}
|
|
[48]
|
Minimizing lifetime of sensitive data in concurrent programs (Refereed Poster)
(Kalpana Gondi, A. Prasad Sistla, V. N. Venkatakrishnan)
Fourth ACM Conference on Data and Application Security and Privacy (CODASPY'14), pp. 171–174
Abstract
The prolonged lifetime of sensitive data (such as passwords) in applications gives rise to several security risks. A promising approach is to erase sensitive data in an "eager fashion", i.e., as soon as its use is no longer required in the application. This approach of minimizing the lifetime of sensitive data has been applied to sequential programs. In this short paper, we present an extension of this approach to concurrent programs where the interleaving of threads makes such eager erasures a challenging research problem.
►bibtex
PDF DOI: 10.1145/2557547.2557589
@inproceedings{Gondi:codaspy14,
abstract = {The prolonged lifetime of sensitive data (such as passwords) in applications gives rise to several security risks. A promising approach is to erase sensitive data in an "eager fashion", i.e., as soon as its use is no longer required in the application. This approach of minimizing the lifetime of sensitive data has been applied to sequential programs. In this short paper, we present an extension of this approach to concurrent programs where the interleaving of threads makes such eager erasures a challenging research problem.},
address = {San Antonio, TX, USA},
author = {Kalpana Gondi and A. Prasad Sistla and V. N. Venkatakrishnan},
booktitle = {Fourth {ACM} Conference on Data and Application Security and Privacy},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-15 13:07:30 -0600},
doi = {10.1145/2557547.2557589},
keywords = {confidentiality; sensitive data leaks; verification;Program analysis;program transformation;Code retrofitting; concurrent applications},
month = {Mar},
pages = {171--174},
publisher = {{ACM}},
series = {CODASPY'14},
title = {Minimizing lifetime of sensitive data in concurrent programs (Refereed Poster)},
url = {https://doi.org/10.1145/2557547.2557589},
year = 2014,
bdsk-url-1 = {https://dblp.org/rec/conf/codaspy/GondiSV14},
bdsk-url-2 = {https://doi.org/10.1145/2557547.2557589},
}
|
|
[47]
|
Digital Check Forgery Attacks on Client Check Truncation Systems
(Rigel Gjomemo, Hafiz Malik, Nilesh Sumb, V. N. Venkatakrishnan, Rashid Ansari)
18th International Conference on Financial Cryptography and Data Security (Lecture Notes in Computer Science, FC'14), 8437, pp. 3–20. 31 papers accepted out of 165 submissions (18.8%)
Abstract
In this paper, we present a digital check forgery attack on check processing systems used in online banking that results in check fraud. Such an attack is facilitated by multiple factors: the use of digital images to perform check transactions, advances in image processing technologies, the use of untrusted client-side devices and software, and the modalities of deposit. We note that digital check forgery attacks offer better chances of success in committing fraud when compared with conventional check forgery attacks. We discuss an instance of this attack and find several leading banks vulnerable to digital check forgery.
►bibtex
PDF DOI: 10.1007/978-3-662-45472-5_1
@inproceedings{Gjomemo:FC14,
abstract = {In this paper, we present a digital check forgery attack on check processing systems used in online banking that results in check fraud. Such an attack is facilitated by multiple factors: the use of digital images to perform check transactions, advances in image processing technologies, the use of untrusted client-side devices and software, and the modalities of deposit. We note that digital check forgery attacks offer better chances of success in committing fraud when compared with conventional check forgery attacks. We discuss an instance of this attack and find several leading banks vulnerable to digital check forgery.},
address = {Barbados},
annote = {31 papers accepted out of 165 submissions, 18.8%},
author = {Rigel Gjomemo and Hafiz Malik and Nilesh Sumb and V. N. Venkatakrishnan and Rashid Ansari},
booktitle = {18th International Conference on Financial Cryptography and Data Security},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 20:45:57 -0600},
doi = {10.1007/978-3-662-45472-5_1},
keywords = {digital forgery; attacks; check truncation; image processing},
month = {Mar},
pages = {3--20},
publisher = {Springer},
series = {Lecture Notes in Computer Science, FC'14},
title = {Digital Check Forgery Attacks on Client Check Truncation Systems},
url = {https://doi.org/10.1007/978-3-662-45472-5_1},
volume = 8437,
year = 2014,
bdsk-url-1 = {https://dblp.org/rec/conf/fc/GjomemoMSVA14},
bdsk-url-2 = {https://doi.org/10.1007/978-3-662-45472-5_1},
bdsk-url-3 = {https://doi.org/10.1007/978-3-662-45472-5%5C_1},
}
|
|
[46]
|
A Threat Table Based Assessment of Information Security in Telemedicine
(John C. Pendergrass, Karen Heart, C. Ranganathan, V. N. Venkatakrishnan)
International Conference on Health Information Technology Advancement (ICHITA'13), pp. 20–31
Abstract
Information security within healthcare is paramount and telemedicine applications present unique security challenges. Technology is giving rise to new and advanced telemedicine applications and understanding the security threats to these applications is needed to ensure, among other things, the privacy of patient information. This paper presents a high level analysis of a telemedicine application in order to better understand the security threats to this unique and vulnerable environment. This risk analysis is performed using the concept of threat tables. This case study focuses on the capture and representation of salient security threats in telemedicine. To analyze the security threats to an application, we present a threat modeling framework utilizing a table driven approach. Our analysis reveals that even in a highly controlled environment with static locations, the security risks posed by telemedicine applications are significant, and that using a threat table approach provides an easy-to-use and effective method for managing these threats.
►bibtex
PDF
@inproceedings{Pendergrass:ichita13,
abstract = {Information security within healthcare is paramount and telemedicine applications present unique security challenges. Technology is giving rise to new and advanced telemedicine applications and understanding the security threats to these applications is needed to ensure, among other things, the privacy of patient information. This paper presents a high level analysis of a telemedicine application in order to better understand the security threats to this unique and vulnerable environment. This risk analysis is performed using the concept of threat tables. This case study focuses on the capture and representation of salient security threats in telemedicine. To analyze the security threats to an application, we present a threat modeling framework utilizing a table driven approach. Our analysis reveals that even in a highly controlled environment with static locations, the security risks posed by telemedicine applications are significant, and that using a threat table approach provides an easy-to-use and effective method for managing these threats.},
address = {Kalamazoo, Michigan, USA},
author = {John C. Pendergrass and Karen Heart and C. Ranganathan and V. N. Venkatakrishnan},
booktitle = {International Conference on Health Information Technology Advancement},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:21:08 -0600},
keywords = {telemedicine; health; Security; Privacy},
month = {Oct},
number = 4,
pages = {20--31},
series = {{ICHITA}'13},
title = {A Threat Table Based Assessment of Information Security in Telemedicine},
url = {https://scholarworks.wmich.edu/cgi/viewcontent.cgi?article=1033&context=ichita_transactions},
year = 2013,
bdsk-url-1 = {https://dblp.org/rec/journals/ijhisi/PendergrassHRV14},
bdsk-url-2 = {https://doi.org/10.4018/ijhisi.2014100102},
bdsk-url-3 = {https://scholarworks.wmich.edu/cgi/viewcontent.cgi?article=1033&context=ichita_transactions},
}
|
|
[45]
|
SafeScript: JavaScript Transformation for Policy Enforcement
(Mike Ter Louw, Phu H. Phung, Rohini Krishnamurti, V. N. Venkatakrishnan)
18th Nordic Conference on Secure IT Systems (Lecture Notes in Computer Science, NordSec'13), 8208, pp. 67–83
Abstract
Approaches for safe execution of JavaScript on web pages have been a topic of recent research interest. A significant number of these approaches aim to provide safety through runtime mediation of accesses made by a JavaScript program. In this paper, we propose a novel, lightweight JavaScript transformation technique for enforcing security properties on untrusted JavaScript programs using source code interposition. Our approach assures namespace isolation between several principals within a single web page, and access control for sensitive browser interfaces. This access control mechanism is based on a whitelist approach to ensure soundness of the mediation. Our technique is lightweight, resulting in low run-time overhead compared to existing solutions such as BrowserShield and Caja.
►bibtex
PDF DOI: 10.1007/978-3-642-41488-6_5
@inproceedings{Louw:nordsec13,
abstract = {Approaches for safe execution of JavaScript on web pages have been a topic of recent research interest. A significant number of these approaches aim to provide safety through runtime mediation of accesses made by a JavaScript program. In this paper, we propose a novel, lightweight JavaScript transformation technique for enforcing security properties on untrusted JavaScript programs using source code interposition. Our approach assures namespace isolation between several principals within a single web page, and access control for sensitive browser interfaces. This access control mechanism is based on a whitelist approach to ensure soundness of the mediation. Our technique is lightweight, resulting in low run-time overhead compared to existing solutions such as BrowserShield and Caja.},
address = {Ilulissat, Greenland},
author = {Mike Ter Louw and Phu H. Phung and Rohini Krishnamurti and V. N. Venkatakrishnan},
booktitle = {18th Nordic Conference on Secure {IT} Systems},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 20:46:08 -0600},
doi = {10.1007/978-3-642-41488-6_5},
keywords = {JavaScript; web application security;program transformation;Code retrofitting},
month = {Oct},
pages = {67--83},
publisher = {Springer},
series = {Lecture Notes in Computer Science, NordSec'13},
title = {SafeScript: JavaScript Transformation for Policy Enforcement},
url = {https://doi.org/10.1007/978-3-642-41488-6_5},
volume = 8208,
year = 2013,
bdsk-url-1 = {https://dblp.org/rec/conf/nordsec/LouwPKV13},
bdsk-url-2 = {https://doi.org/10.1007/978-3-642-41488-6_5},
}
|
|
[44]
|
CAVEAT: Facilitating Interactive and Secure Client-Side Validators for Ruby on Rails applications
(Timothy L. Hinrichs, Michael Cueno, Daniel Ruiz, V. N. Venkatakrishnan, Lenore D. Zuck)
The Seventh International Conference on Emerging Security Information, Systems and Technologies (SECUREWARE'13)
Abstract
Modern web applications validate user-supplied data in two places: the server (to protect against attacks such as parameter tampering) and the client (to give the user a rich, interactive data-entry experience). However, today's web development frameworks provide little support for ensuring that client- and server-side validation is kept in sync. In this paper, we introduce CAVEAT, a tool that automatically creates client-side input validation for Ruby on Rails applications by analyzing server-side validation routines. The effectiveness of CAVEAT for new applications is demonstrated by developing three custom apps, and its applicability to existing applications is demonstrated by examining 25 open-source applications.
►bibtex
PDF
@inproceedings{Hinrichs:secureware13,
abstract = {Modern web applications validate user-supplied data in two places: the server (to protect against attacks such as parameter tampering) and the client (to give the user a rich, interactive data-entry experience). However, today's web development frameworks provide little support for ensuring that client- and server-side validation is kept in sync. In this paper, we introduce CAVEAT, a tool that automatically creates client-side input validation for Ruby on Rails applications by analyzing server-side validation routines. The effectiveness of CAVEAT for new applications is demonstrated by developing three custom apps, and its applicability to existing applications is demonstrated by examining 25 open-source applications.},
address = {Barcelona, Spain},
author = {Timothy L. Hinrichs and Michael Cueno and Daniel Ruiz and V. N. Venkatakrishnan and Lenore D. Zuck},
booktitle = {The Seventh International Conference on Emerging Security Information, Systems and Technologies},
date-added = {2026-02-15 11:28:16 -0600},
date-modified = {2026-02-15 11:33:31 -0600},
keywords = {web application security; program synthesis; Ruby on Rails},
month = {Aug},
series = {SECUREWARE'13},
title = {CAVEAT: Facilitating Interactive and Secure Client-Side Validators for Ruby on Rails applications},
url = {https://api.semanticscholar.org/CorpusID:64566423},
year = 2013,
bdsk-url-1 = {https://api.semanticscholar.org/CorpusID:64566423},
}
|
|
[43]
|
WEBLOG: a declarative language for secure web development
(Timothy L. Hinrichs, Daniele Rossetti, Gabriele Petronella, V. N. Venkatakrishnan, A. Prasad Sistla, Lenore D. Zuck)
Proceedings of the 2013 ACM SIGPLAN Workshop on Programming Languages and Analysis for Security, PLAS (PLAS'13), pp. 59–70
Abstract
WEBLOG is a declarative language for web application development designed to automatically eliminate several security vulnerabilities common to today's web applications. In this paper, we introduce Weblog, detail the security vulnerabilities it eliminates, and discuss how those vulnerabilities are eliminated. We then evaluate Weblog's ability to build and secure real-world applications by comparing traditional implementations of 3 existing small- to medium-size web applications to Weblog implementations.
►bibtex
PDF DOI: 10.1145/2465106.2465119
@inproceedings{Hinrichs:plas13,
abstract = {WEBLOG is a declarative language for web application development designed to automatically eliminate several security vulnerabilities common to today's web applications. In this paper, we introduce Weblog, detail the security vulnerabilities it eliminates, and discuss how those vulnerabilities are eliminated. We then evaluate Weblog's ability to build and secure real-world applications by comparing traditional implementations of 3 existing small- to medium-size web applications to Weblog implementations.},
address = {Seattle, WA, USA},
author = {Timothy L. Hinrichs and Daniele Rossetti and Gabriele Petronella and V. N. Venkatakrishnan and A. Prasad Sistla and Lenore D. Zuck},
booktitle = {Proceedings of the 2013 {ACM} {SIGPLAN} Workshop on Programming Languages and Analysis for Security, {PLAS}},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:21:30 -0600},
doi = {10.1145/2465106.2465119},
keywords = {web application security; Program analysis; declarative languages; verification; formal methods},
month = {June},
pages = {59--70},
publisher = {{ACM}},
series = {{PLAS}'13},
title = {{WEBLOG:} a declarative language for secure web development},
url = {https://doi.org/10.1145/2465106.2465119},
year = 2013,
bdsk-url-1 = {https://dblp.org/rec/conf/pldi/HinrichsRPVSZ13},
bdsk-url-2 = {https://doi.org/10.1145/2465106.2465119},
}
|
|
[42]
|
TamperProof: a server-agnostic defense for parameter tampering attacks on web applications
(Nazari Skrupsky, Prithvi Bisht, Timothy L. Hinrichs, V. N. Venkatakrishnan, Lenore D. Zuck)
Third ACM Conference on Data and Application Security and Privacy (CODASPY'13), pp. 129–140 24 papers accepted out of 107 submissions, 22.4%
Abstract
Parameter tampering attacks are dangerous to a web application whose server performs weaker data sanitization than its client. This paper presents TamperProof, a methodology and tool that offers a novel and efficient mechanism to protect Web applications from parameter tampering attacks. TamperProof is an online defense deployed in a trusted environment between the client and server and requires no access to, or knowledge of, the server side codebase, making it effective for both new and legacy applications. The paper reports on experiments that demonstrate TamperProof's power in efficiently preventing all known parameter tampering vulnerabilities on ten different applications.
►bibtex
PDF DOI: 10.1145/2435349.2435365
@inproceedings{Skrupsky:codaspy13,
abstract = {Parameter tampering attacks are dangerous to a web application whose server performs weaker data sanitization than its client. This paper presents TamperProof, a methodology and tool that offers a novel and efficient mechanism to protect Web applications from parameter tampering attacks. TamperProof is an online defense deployed in a trusted environment between the client and server and requires no access to, or knowledge of, the server side codebase, making it effective for both new and legacy applications. The paper reports on experiments that demonstrate TamperProof's power in efficiently preventing all known parameter tampering vulnerabilities on ten different applications.},
address = {San Antonio, TX, USA},
annote = {24 papers accepted out of 107 submissions, 22.4%},
author = {Nazari Skrupsky and Prithvi Bisht and Timothy L. Hinrichs and V. N. Venkatakrishnan and Lenore D. Zuck},
booktitle = {Third {ACM} Conference on Data and Application Security and Privacy},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-15 10:38:55 -0600},
doi = {10.1145/2435349.2435365},
keywords = {parameter tampering; web application security; code retrofitting},
month = {Feb},
pages = {129--140},
publisher = {{ACM}},
series = {CODASPY'13},
title = {TamperProof: a server-agnostic defense for parameter tampering attacks on web applications},
url = {https://doi.org/10.1145/2435349.2435365},
year = 2013,
bdsk-url-1 = {https://dblp.org/rec/conf/codaspy/SkrupskyBHVZ13},
bdsk-url-2 = {https://doi.org/10.1145/2435349.2435365},
}
|
|
[41]
|
WAVES: Automatic Synthesis of Client-Side Validation Code for Web Applications
(Nazari Skrupsky, Maliheh Monshizadeh, Prithvi Bisht, Timothy L. Hinrichs, V. N. Venkatakrishnan, Lenore D. Zuck)
2012 ASE International Conference on Cyber Security (ICS'12), pp. 46–53
Abstract
The current practice of web application development treats the client and server components of the application as two separate but interacting pieces of software. Each component is written independently, usually in distinct programming languages and development platforms --- a process known to be prone to errors when the client and server share application logic. When the client and server are out of sync, an ``impedance mismatch'' occurs, often leading to software vulnerabilities as demonstrated by recent work on parameter tampering. This paper outlines the groundwork for a new software development approach, WAVES, where developers author the server-side application logic and rely on tools to automatically synthesize the corresponding client-side application logic. WAVES employs program analysis techniques to extract a logical specification from the server, from which it synthesizes client code. WAVES also synthesizes interactive client interfaces that include asynchronous callbacks whose performance and coverage rival that of manually written clients while ensuring no new security vulnerabilities are introduced. The effectiveness of WAVES is demonstrated and evaluated on three real-world web applications.
►bibtex
PDF DOI: 10.1109/CYBERSECURITY.2012.13
@inproceedings{Skrupsky:ase12,
abstract = {The current practice of web application development treats the client and server components of the application as two separate but interacting pieces of software. Each component is written independently, usually in distinct programming languages and development platforms --- a process known to be prone to errors when the client and server share application logic. When the client and server are out of sync, an ``impedance mismatch'' occurs, often leading to software vulnerabilities as demonstrated by recent work on parameter tampering. This paper outlines the groundwork for a new software development approach, WAVES, where developers author the server-side application logic and rely on tools to automatically synthesize the corresponding client-side application logic. WAVES employs program analysis techniques to extract a logical specification from the server, from which it synthesizes client code. WAVES also synthesizes interactive client interfaces that include asynchronous callbacks whose performance and coverage rival that of manually written clients while ensuring no new security vulnerabilities are introduced. The effectiveness of WAVES is demonstrated and evaluated on three real-world web applications.},
address = {Alexandria, VA, USA},
author = {Nazari Skrupsky and Maliheh Monshizadeh and Prithvi Bisht and Timothy L. Hinrichs and V. N. Venkatakrishnan and Lenore D. Zuck},
booktitle = {2012 {ASE} International Conference on Cyber Security},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:21:52 -0600},
doi = {10.1109/CYBERSECURITY.2012.13},
keywords = {parameter tampering; web application security; program synthesis},
month = {Dec},
pages = {46--53},
publisher = {{IEEE} Computer Society},
series = {{ICS}'12},
title = {{WAVES:} Automatic Synthesis of Client-Side Validation Code for Web Applications},
url = {https://doi.org/10.1109/CyberSecurity.2012.13},
year = 2012,
bdsk-url-1 = {https://dblp.org/rec/conf/cybersecurity/SkrupskyMBHVZ12},
bdsk-url-2 = {https://doi.org/10.1109/CyberSecurity.2012.13},
}
|
|
[40]
|
Proceedings of the 8th International Conference on Information Systems Security, ICISS 2012
(V. N. Venkatakrishnan, Diganta Goswami)
Proceedings of the 8th International Conference on Information Systems Security, ICISS 2012 (Lecture Notes in Computer Science), 7671
Abstract
This book constitutes the refereed proceedings of the 8th International Conference on Information Systems Security, ICISS 2012, held in Guwahati, India, in December 2012. The 18 revised full papers and 3 short papers presented were carefully reviewed and selected from 72 submissions. The papers are organized in topical sections on software security, access control, covert communications, network security, and database and distributed systems security.
►bibtex
PDF DOI: 10.1007/978-3-642-35130-3
@proceedings{Venkatakrishnan:iciss12,
abstract = {This book constitutes the refereed proceedings of the 8th International Conference on Information Systems Security, ICISS 2012, held in Guwahati, India, in December 2012. The 18 revised full papers and 3 short papers presented were carefully reviewed and selected from 72 submissions. The papers are organized in topical sections on software security, access control, covert communications, network security, and database and distributed systems security.},
address = {Guwahati, India},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 21:44:45 -0600},
doi = {10.1007/978-3-642-35130-3},
editor = {V. N. Venkatakrishnan and Diganta Goswami},
isbn = {978-3-642-35129-7},
keywords = {Security},
month = {Dec},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
title = {Proceedings of the 8th International Conference on Information Systems Security, {ICISS} 2012},
url = {https://doi.org/10.1007/978-3-642-35130-3},
volume = 7671,
year = 2012,
bdsk-url-1 = {https://dblp.org/rec/conf/iciss/2012},
bdsk-url-2 = {https://doi.org/10.1007/978-3-642-35130-3},
}
|
|
[39]
|
Don't Repeat Yourself: Automatically Synthesizing Client-side Validation Code for Web Applications
(Nazari Skrupsky, Maliheh Monshizadeh, Prithvi Bisht, Timothy L. Hinrichs, V. N. Venkatakrishnan, Lenore D. Zuck)
3rd USENIX Conference on Web Application Development, WebApps'12, Boston, MA, USA, June 13, 2012 (WebApps'12), pp. 107–108
Abstract
We outline the groundwork for a new software development approach where developers author the server-side application logic and rely on tools to automatically synthesize the corresponding client-side application logic. Our approach uses program analysis techniques to extract a logical specification from the server and synthesizes client code from that specification. Our implementation (WAVES) synthesizes interactive client interfaces that include asynchronous callbacks whose performance and coverage rival that of manually written clients, while ensuring that no new security vulnerabilities are introduced.
►bibtex
PDF
@inproceedings{Skrupsky:webapps12,
abstract = {We outline the groundwork for a new software development approach where developers author the server-side application logic and rely on tools to automatically synthesize the corresponding client-side application logic. Our approach uses program analysis techniques to extract a logical specification from the server and synthesizes client code from that specification. Our implementation (WAVES) synthesizes interactive client interfaces that include asynchronous callbacks whose performance and coverage rival that of manually written clients, while ensuring that no new security vulnerabilities are introduced.},
address = {Boston, MA, USA},
author = {Nazari Skrupsky and Maliheh Monshizadeh and Prithvi Bisht and Timothy L. Hinrichs and V. N. Venkatakrishnan and Lenore D. Zuck},
booktitle = {3rd {USENIX} Conference on Web Application Development, WebApps'12, Boston, MA, USA, June 13, 2012},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:22:27 -0600},
keywords = {parameter tampering; web application security; program synthesis},
month = {June},
pages = {107--108},
publisher = {{USENIX} Association},
series = {{WebApps}'12},
timestamp = {Wed, 04 Jul 2018 13:06:34 +0200},
title = {Don't Repeat Yourself: Automatically Synthesizing Client-side Validation Code for Web Applications},
url = {https://www.usenix.org/conference/webapps12/technical-sessions/presentation/skrupsky},
year = 2012,
bdsk-url-1 = {https://dblp.org/rec/conf/webapps/SkrupskyMBHVZ12},
bdsk-url-2 = {https://www.usenix.org/conference/webapps12/technical-sessions/presentation/skrupsky},
}
|
|
[38]
|
SWIPE: eager erasure of sensitive data in large scale systems software
(Kalpana Gondi, Prithvi Bisht, Praveen Venkatachari, A. Prasad Sistla, V. N. Venkatakrishnan)
Second ACM Conference on Data and Application Security and Privacy, CODASPY 2012, San Antonio, TX, USA, February 7-9, 2012 (CODASPY'12), pp. 295–306 21 out of 113 papers, 18.5%
Abstract
We describe SWIPE, an approach to reduce the lifetime of sensitive, memory-resident data in large scale applications written in C. In contrast to prior approaches that used a delayed or lazy approach to the problem of erasing sensitive data, SWIPE uses a novel eager erasure approach that minimizes the risk of accidental sensitive data leakage. SWIPE achieves this by transforming a legacy C program to include additional instructions that erase sensitive data immediately after its intended use. SWIPE is guided by a highly-scalable static analysis technique that precisely identifies the locations to introduce erase instructions in the original program. The programs transformed using SWIPE enjoy several additional benefits: minimization of leaks that arise due to data dependencies; erasure of sensitive data with minimal developer guidance; and negligible performance overheads.
►bibtex
PDF DOI: 10.1145/2133601.2133638
@inproceedings{Gondi:codaspy12,
abstract = {We describe SWIPE, an approach to reduce the lifetime of sensitive, memory-resident data in large scale applications written in C. In contrast to prior approaches that used a delayed or lazy approach to the problem of erasing sensitive data, SWIPE uses a novel eager erasure approach that minimizes the risk of accidental sensitive data leakage. SWIPE achieves this by transforming a legacy C program to include additional instructions that erase sensitive data immediately after its intended use. SWIPE is guided by a highly-scalable static analysis technique that precisely identifies the locations to introduce erase instructions in the original program. The programs transformed using SWIPE enjoy several additional benefits: minimization of leaks that arise due to data dependencies; erasure of sensitive data with minimal developer guidance; and negligible performance overheads.},
address = {San Antonio, TX, USA},
annote = {21 out of 113 papers, 18.5%},
author = {Kalpana Gondi and Prithvi Bisht and Praveen Venkatachari and A. Prasad Sistla and V. N. Venkatakrishnan},
bibsource = {dblp computer science bibliography, https://dblp.org},
biburl = {https://dblp.org/rec/conf/codaspy/GondiBVSV12.bib},
booktitle = {Second {ACM} Conference on Data and Application Security and Privacy, {CODASPY} 2012, San Antonio, TX, USA, February 7-9, 2012},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:03:29 -0600},
doi = {10.1145/2133601.2133638},
keywords = {confidentiality; sensitive data leaks; verification; program analysis; program transformation; code retrofitting},
month = {Feb},
pages = {295--306},
publisher = {{ACM}},
series = {{CODASPY}'12},
timestamp = {Tue, 06 Nov 2018 00:00:00 +0100},
title = {{SWIPE:} eager erasure of sensitive data in large scale systems software},
url = {https://doi.org/10.1145/2133601.2133638},
year = 2012,
bdsk-url-1 = {https://dblp.org/rec/conf/codaspy/GondiBVSV12},
bdsk-url-2 = {https://doi.org/10.1145/2133601.2133638},
}
|
|
[37]
|
WAPTEC: whitebox analysis of web applications for parameter tampering exploit construction
(Prithvi Bisht, Timothy L. Hinrichs, Nazari Skrupsky, V. N. Venkatakrishnan)
Proceedings of the 18th ACM Conference on Computer and Communications Security, CCS 2011, Chicago, Illinois, USA, October 17-21, 2011 (CCS'11), pp. 575–586 60 papers accepted out of 429 submissions, 14%
Abstract
Parameter tampering attacks are dangerous to a web application whose server fails to replicate the validation of user-supplied data that is performed by the client. Malicious users who circumvent the client can capitalize on the missing server validation. In this paper, we describe WAPTEC, a tool that is designed to automatically identify parameter tampering vulnerabilities and generate exploits by construction to demonstrate those vulnerabilities. WAPTEC involves a new approach to whitebox analysis of the server's code. We tested WAPTEC on six open source applications and found previously unknown vulnerabilities in every single one of them.
►bibtex
PDF DOI: 10.1145/2046707.2046774
@inproceedings{Bisht:ccs11,
abstract = {Parameter tampering attacks are dangerous to a web application whose server fails to replicate the validation of user-supplied data that is performed by the client. Malicious users who circumvent the client can capitalize on the missing server validation. In this paper, we describe WAPTEC, a tool that is designed to automatically identify parameter tampering vulnerabilities and generate exploits by construction to demonstrate those vulnerabilities. WAPTEC involves a new approach to whitebox analysis of the server's code. We tested WAPTEC on six open source applications and found previously unknown vulnerabilities in every single one of them.},
address = {Chicago, IL, USA},
annote = {60 papers accepted out of 429 submissions, 14%},
author = {Prithvi Bisht and Timothy L. Hinrichs and Nazari Skrupsky and V. N. Venkatakrishnan},
booktitle = {Proceedings of the 18th {ACM} Conference on Computer and Communications Security, {CCS} 2011, Chicago, Illinois, USA, October 17-21, 2011},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-15 09:54:49 -0600},
doi = {10.1145/2046707.2046774},
keywords = {parameter tampering; web application security; attacks; vulnerability analysis; symbolic evaluation},
month = {Oct},
pages = {575--586},
publisher = {{ACM}},
series = {CCS'11},
title = {{WAPTEC:} whitebox analysis of web applications for parameter tampering exploit construction},
url = {https://doi.org/10.1145/2046707.2046774},
year = 2011,
bdsk-url-1 = {https://dblp.org/rec/conf/ccs/BishtHSV11},
bdsk-url-2 = {https://doi.org/10.1145/2046707.2046774},
}
|
|
[36]
|
Applications of Formal Methods to Web Application Security
(V. N. Venkatakrishnan)
Encyclopedia of Cryptography and Security, 2nd Ed (ECS), pp. 45–46
Abstract
The use of formal methods in web application security refers to the use of techniques such as static analysis and model checking to analyze web application software for security properties.
►bibtex
PDF DOI: 10.1007/978-1-4419-5906-5_856
@incollection{Venkatakrishnan:formal:encyclocs11,
abstract = {The use of formal methods in web application security refers to the use of techniques such as static analysis and model checking to analyze web application software for security properties.},
author = {V. N. Venkatakrishnan},
booktitle = {Encyclopedia of Cryptography and Security, 2nd Ed},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 20:46:30 -0600},
doi = {10.1007/978-1-4419-5906-5_856},
editor = {Henk C. A. van Tilborg and Sushil Jajodia},
keywords = {web application security; formal methods},
month = {Jul},
pages = {45--46},
publisher = {Springer},
series = {{ECS}},
title = {Applications of Formal Methods to Web Application Security},
url = {https://doi.org/10.1007/978-1-4419-5906-5_856},
year = 2011,
bdsk-url-1 = {https://dblp.org/rec/reference/crypt/Venkatakrishnan11},
bdsk-url-2 = {https://doi.org/10.1007/978-1-4419-5906-5_856},
bdsk-url-3 = {https://doi.org/10.1007/978-1-4419-5906-5%5C_856},
}
|
|
[35]
|
Web Browser Security and Privacy
(V. N. Venkatakrishnan)
Encyclopedia of Cryptography and Security, 2nd Ed (ECS), pp. 1372–1373
Abstract
Web browser security and privacy collectively refers to (a) the integrity of the browser platform that accepts, processes, and communicates end-user data to web sites and (b) the confidentiality and integrity of this information exchanged.
►bibtex
PDF DOI: 10.1007/978-1-4419-5906-5_665
@incollection{Venkatakrishnan:browser:encyclocs11,
abstract = {Web browser security and privacy collectively refers to (a) the integrity of the browser platform that accepts, processes, and communicates end-user data to web sites and (b) the confidentiality and integrity of this information exchanged.},
author = {V. N. Venkatakrishnan},
booktitle = {Encyclopedia of Cryptography and Security, 2nd Ed},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 21:26:52 -0600},
doi = {10.1007/978-1-4419-5906-5_665},
editor = {Henk C. A. van Tilborg and Sushil Jajodia},
keywords = {browser security; browser extension},
month = {Jul},
pages = {1372--1373},
publisher = {Springer},
series = {{ECS}},
title = {Web Browser Security and Privacy},
url = {https://doi.org/10.1007/978-1-4419-5906-5_665},
year = 2011,
bdsk-url-1 = {https://dblp.org/rec/reference/crypt/Venkatakrishnan11a},
bdsk-url-2 = {https://doi.org/10.1007/978-1-4419-5906-5_665},
}
|
|
[34]
|
WebAppArmor: A Framework for Robust Prevention of Attacks on Web Applications (Invited Paper)
(V. N. Venkatakrishnan, Prithvi Bisht, Mike Ter Louw, Michelle Zhou, Kalpana Gondi, Karthik Thotta Ganesh)
Information Systems Security - 6th International Conference, ICISS 2010, Gandhinagar, India, December 17-19, 2010. Proceedings (Lecture Notes in Computer Science, ICISS'10), 6503, pp. 3–26 Invited Paper and Keynote Presentation
Abstract
As the World Wide Web continues to evolve, the number of web-based attacks that target web applications is on the rise. Attacks such as Cross-site Scripting (XSS), SQL Injection and Cross-site Request Forgery (XSRF) are among the topmost threats on the Web, and defending against these attacks is a growing concern. In this paper, we describe WebAppArmor, a framework that is aimed at preventing these attacks on existing (legacy) web applications. The main feature of this framework is that it offers a unified perspective to address these problems in the context of existing web applications. The framework incorporates techniques based on static and dynamic analysis, symbolic evaluation and execution monitoring to retrofit existing web applications to be resilient to these attacks.
►bibtex
PDF DOI: 10.1007/978-3-642-17714-9_2
@inproceedings{Venkatakrishnan:ICISS10,
abstract = {As the World Wide Web continues to evolve, the number of web-based attacks that target web applications is on the rise. Attacks such as Cross-site Scripting (XSS), SQL Injection and Cross-site Request Forgery (XSRF) are among the topmost threats on the Web, and defending against these attacks is a growing concern. In this paper, we describe WebAppArmor, a framework that is aimed at preventing these attacks on existing (legacy) web applications. The main feature of this framework is that it offers a unified perspective to address these problems in the context of existing web applications. The framework incorporates techniques based on static and dynamic analysis, symbolic evaluation and execution monitoring to retrofit existing web applications to be resilient to these attacks.},
address = {Gandhinagar, India},
annote = {Invited Paper and Keynote Presentation},
author = {V. N. Venkatakrishnan and Prithvi Bisht and Mike Ter Louw and Michelle Zhou and Kalpana Gondi and Karthik Thotta Ganesh},
booktitle = {Information Systems Security - 6th International Conference, {ICISS} 2010, Gandhinagar, India, December 17-19, 2010. Proceedings},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 20:46:47 -0600},
doi = {10.1007/978-3-642-17714-9_2},
keywords = {web application security; content security; SQL injection; cross-site scripting; cross-site request forgery; web advertising},
month = {Dec},
pages = {3--26},
publisher = {Springer},
series = {Lecture Notes in Computer Science, ICISS'10},
title = {WebAppArmor: {A} Framework for Robust Prevention of Attacks on Web Applications (Invited Paper)},
url = {https://doi.org/10.1007/978-3-642-17714-9_2},
volume = 6503,
year = 2010,
bdsk-url-1 = {https://dblp.org/rec/conf/iciss/VenkatakrishnanBLZGG10},
bdsk-url-2 = {https://doi.org/10.1007/978-3-642-17714-9_2},
}
|
|
[33]
|
Strengthening XSRF Defenses for Legacy Web Applications Using Whitebox Analysis and Transformation
(Michelle Zhou, Prithvi Bisht, V. N. Venkatakrishnan)
Information Systems Security - 6th International Conference, ICISS 2010, Gandhinagar, India, December 17-19, 2010. Proceedings (Lecture Notes in Computer Science), 6503, pp. 96–110 14 papers out of 51 submissions, 27.4%
Abstract
Cross Site Request Forgery (XSRF) is regarded as one of the major threats on the Web. In this paper, we propose an approach that automatically retrofits the source code of legacy web applications with a widely-used defense approach for this attack. Our approach addresses a number of shortcomings in prior blackbox solutions for automatic XSRF protection. Our approach has been implemented in a tool called X-Protect that was used to retrofit several commercial Java-based web applications. Our experimental results demonstrate that the X-Protect approach is both effective and efficient in practice.
►bibtex
PDF DOI: 10.1007/978-3-642-17714-9_8
@inproceedings{Zhou:iciss10,
abstract = {Cross Site Request Forgery (XSRF) is regarded as one of the major threats on the Web. In this paper, we propose an approach that automatically retrofits the source code of legacy web applications with a widely-used defense approach for this attack. Our approach addresses a number of shortcomings in prior blackbox solutions for automatic XSRF protection. Our approach has been implemented in a tool called X-Protect that was used to retrofit several commercial Java-based web applications. Our experimental results demonstrate that the X-Protect approach is both effective and efficient in practice.},
annote = {14 papers out of 51 submissions, 27.4%},
author = {Michelle Zhou and Prithvi Bisht and V. N. Venkatakrishnan},
booktitle = {Information Systems Security - 6th International Conference, {ICISS} 2010, Gandhinagar, India, December 17-19, 2010. Proceedings},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 20:47:00 -0600},
doi = {10.1007/978-3-642-17714-9_8},
editor = {Somesh Jha and Anish Mathuria},
keywords = {cross-site request forgery; web application security; program transformation; code retrofitting},
month = {Dec},
pages = {96--110},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
title = {Strengthening {XSRF} Defenses for Legacy Web Applications Using Whitebox Analysis and Transformation},
url = {https://doi.org/10.1007/978-3-642-17714-9_8},
volume = 6503,
year = 2010,
bdsk-url-1 = {https://dblp.org/rec/conf/iciss/ZhouBV10},
bdsk-url-2 = {https://doi.org/10.1007/978-3-642-17714-9_8},
}
|
|
[32]
|
NoTamper: automatic blackbox detection of parameter tampering opportunities in web applications
(Prithvi Bisht, Timothy L. Hinrichs, Nazari Skrupsky, Radoslaw Bobrowicz, V. N. Venkatakrishnan)
Proceedings of the 17th ACM Conference on Computer and Communications Security, CCS 2010, Chicago, Illinois, USA, October 4-8, 2010 (CCS'10), pp. 607–618 55 papers accepted out of 320, 17.2%. Among the 10 nationwide finalists for the 2010 AT&T Award for Best Applied Security Research paper!
Abstract
Web applications rely heavily on client-side computation to examine and validate form inputs that are supplied by a user (e.g., ``credit card expiration date must be valid''). This is typically done for two reasons: to reduce burden on the server and to avoid latencies in communicating with the server. However, when a server fails to replicate the validation performed on the client, it is potentially vulnerable to attack. In this paper, we present a novel approach for automatically detecting potential server-side vulnerabilities of this kind in existing (legacy) web applications through blackbox analysis. We discuss the design and implementation of NOTAMPER, a tool that realizes this approach. NOTAMPER has been employed to discover several previously unknown vulnerabilities in a number of open-source web applications and live web sites.
►bibtex
PDF DOI: 10.1145/1866307.1866375
@inproceedings{Bisht:notamper:ccs10,
abstract = {Web applications rely heavily on client-side computation to examine and validate form inputs that are supplied by a user (e.g., ``credit card expiration date must be valid''). This is typically done for two reasons: to reduce burden on the server and to avoid latencies in communicating with the server. However, when a server fails to replicate the validation performed on the client, it is potentially vulnerable to attack. In this paper, we present a novel approach for automatically detecting potential server-side vulnerabilities of this kind in existing (legacy) web applications through blackbox analysis. We discuss the design and implementation of NOTAMPER, a tool that realizes this approach. NOTAMPER has been employed to discover several previously unknown vulnerabilities in a number of open-source web applications and live web sites.},
address = {Chicago, IL, USA},
annote = {55 papers accepted out of 320, 17.2%.},
author = {Prithvi Bisht and Timothy L. Hinrichs and Nazari Skrupsky and Radoslaw Bobrowicz and V. N. Venkatakrishnan},
booktitle = {Proceedings of the 17th {ACM} Conference on Computer and Communications Security, {CCS} 2010, Chicago, Illinois, USA, October 4-8, 2010},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 22:08:13 -0600},
doi = {10.1145/1866307.1866375},
keywords = {parameter tampering; web application security; attacks; vulnerability analysis; symbolic evaluation},
month = {Oct},
note = {Among the 10 nationwide finalists for the 2010 AT\&T Award for Best Applied Security Research paper!},
pages = {607--618},
publisher = {{ACM}},
series = {CCS'10},
title = {NoTamper: automatic blackbox detection of parameter tampering opportunities in web applications},
url = {https://doi.org/10.1145/1866307.1866375},
year = 2010,
bdsk-url-1 = {https://dblp.org/rec/conf/ccs/BishtHSBV10},
bdsk-url-2 = {https://doi.org/10.1145/1866307.1866375},
}
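The validation gap NoTamper hunts for can be sketched in a few lines (a hypothetical illustration, not the tool's code or its target languages): the client enforces a constraint that the server never re-checks, so a directly submitted, tampered request slips through.

```python
# Hypothetical sketch of a parameter-tampering opportunity, the class of
# server-side gap NoTamper detects via blackbox analysis. Names and
# logic are invented for this example.

def client_side_check(form):
    # Mimics the JavaScript validation run in the browser.
    return form["quantity"] > 0

def server_side_handler(form):
    # Vulnerable: trusts that the browser already validated `quantity`.
    return form["quantity"] * 10  # unit price of 10

# An attacker submits the form directly, bypassing the client check:
evil = {"quantity": -5}
assert not client_side_check(evil)   # the browser would have rejected it
print(server_side_handler(evil))     # the server still computes -50
```

NoTamper automates the opposite direction of this check: it extracts the client-side constraints, solves for inputs that violate them, and observes whether the server nonetheless accepts those inputs.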
|
|
[31]
|
TAPS: automatically preparing safe SQL queries
(Prithvi Bisht, A. Prasad Sistla, V. N. Venkatakrishnan)
Proceedings of the 17th ACM Conference on Computer and Communications Security, CCS 2010, Chicago, Illinois, USA, October 4-8, 2010 (CCS'10), pp. 645–647
Abstract
We present the first sound program transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications. This extended abstract is based on our paper [4] that appeared in the Financial Cryptography and Data Security (FC'2010) conference.
►bibtex
PDF DOI: 10.1145/1866307.1866384
@inproceedings{Bisht:taps:CCS10,
abstract = {We present the first sound program transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications. This extended abstract is based on our paper [4] that appeared in the Financial Cryptography and Data Security (FC'2010) conference.},
author = {Prithvi Bisht and A. Prasad Sistla and V. N. Venkatakrishnan},
booktitle = {Proceedings of the 17th {ACM} Conference on Computer and Communications Security, {CCS} 2010, Chicago, Illinois, USA, October 4-8, 2010},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:23:03 -0600},
doi = {10.1145/1866307.1866384},
editor = {Ehab Al{-}Shaer and Angelos D. Keromytis and Vitaly Shmatikov},
keywords = {SQL injection; program transformation; program analysis; code retrofitting; symbolic evaluation},
month = {Oct},
pages = {645--647},
publisher = {{ACM}},
series = {{CCS}'10},
title = {{TAPS:} automatically preparing safe {SQL} queries},
url = {https://doi.org/10.1145/1866307.1866384},
year = 2010,
bdsk-url-1 = {https://dblp.org/rec/conf/ccs/BishtSV10},
bdsk-url-2 = {https://doi.org/10.1145/1866307.1866384},
}
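The prepared-statement idiom that TAPS retrofits can be illustrated with a minimal, hypothetical example (shown here with Python's sqlite3 for brevity; TAPS itself transforms legacy web-application source, and none of these names come from the paper):

```python
import sqlite3

# Minimal illustration (not TAPS output) of the unsafe string-built
# query versus the prepared form the transformation targets.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_role_unsafe(name):
    # Low-level string construction: input can rewrite the query.
    return conn.execute(
        "SELECT role FROM users WHERE name = '%s'" % name).fetchall()

def find_role_prepared(name):
    # PREPARE-style form: query structure is fixed, input is bound as data.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_role_unsafe(payload))    # injection matches every row
print(find_role_prepared(payload))  # payload treated literally: no rows
```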
|
|
[30]
|
AdJail: Practical Enforcement of Confidentiality and Integrity Policies on Web Advertisements
(Mike Ter Louw, Karthik Thotta Ganesh, V. N. Venkatakrishnan)
19th USENIX Security Symposium, Washington, DC, USA, August 11-13, 2010, Proceedings (SEC'10), pp. 371–388. 30 papers accepted out of 202, 14.8%.
Abstract
Web publishers frequently integrate third-party advertisements into web pages that also contain sensitive publisher data and end-user personal data. This practice exposes sensitive page content to confidentiality and integrity attacks launched by advertisements. In this paper, we propose a novel framework for addressing security threats posed by third-party advertisements. The heart of our framework is an innovative isolation mechanism that enables publishers to transparently interpose between advertisements and end users. The mechanism supports fine-grained policy specification and enforcement, and does not affect the user experience of interactive ads. Evaluation of our framework suggests compatibility with several mainstream ad networks, security from many threats from advertisements and acceptable performance overheads.
►bibtex
PDF
@inproceedings{Louw:usenixsec10,
abstract = {Web publishers frequently integrate third-party advertisements into web pages that also contain sensitive publisher data and end-user personal data. This practice exposes sensitive page content to confidentiality and integrity attacks launched by advertisements. In this paper, we propose a novel framework for addressing security threats posed by third-party advertisements. The heart of our framework is an innovative isolation mechanism that enables publishers to transparently interpose between advertisements and end users. The mechanism supports fine-grained policy specification and enforcement, and does not affect the user experience of interactive ads. Evaluation of our framework suggests compatibility with several mainstream ad networks, security from many threats from advertisements and acceptable performance overheads.},
address = {Washington D.C., USA},
annote = {30 papers accepted out of 202, 14.8%},
author = {Mike Ter Louw and Karthik Thotta Ganesh and V. N. Venkatakrishnan},
booktitle = {19th {USENIX} Security Symposium, Washington, DC, USA, August 11-13, 2010, Proceedings},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:03:46 -0600},
keywords = {web advertising; content security; cross-site scripting; sandboxing; isolated execution},
month = {Aug},
pages = {371--388},
publisher = {{USENIX} Association},
series = {{SEC}'10},
title = {AdJail: Practical Enforcement of Confidentiality and Integrity Policies on Web Advertisements},
url = {http://www.usenix.org/events/sec10/tech/full_papers/TerLouw.pdf},
year = 2010,
bdsk-url-1 = {https://dblp.org/rec/conf/uss/LouwGV10},
bdsk-url-2 = {http://www.usenix.org/events/sec10/tech/full_papers/TerLouw.pdf},
}
|
|
[29]
|
CANDID: Dynamic candidate evaluations for automatic prevention of SQL injection attacks
(Prithvi Bisht, Parthasarathy Madhusudan, V. N. Venkatakrishnan)
ACM Transactions on Information Systems Security (TISSEC'10), 13, pp. 14:1–14:39
Abstract
SQL injection attacks are one of the top-most threats for applications written for the Web. These attacks are launched through specially crafted user inputs, on Web applications that use low-level string operations to construct SQL queries. In this work, we exhibit a novel and powerful scheme for automatically transforming Web applications to render them safe against all SQL injection attacks. A characteristic diagnostic feature of SQL injection attacks is that they change the intended structure of queries issued. Our technique for detecting SQL injection is to dynamically mine the programmer-intended query structure on any input, and detect attacks by comparing it against the structure of the actual query issued. We propose a simple and novel mechanism, called Candid, for mining programmer intended queries by dynamically evaluating runs over benign candidate inputs. This mechanism is theoretically well founded and is based on inferring intended queries by considering the symbolic query computed on a program run. Our approach has been implemented in a tool called Candid that retrofits Web applications written in Java to defend them against SQL injection attacks. We have also implemented Candid by modifying a Java Virtual Machine, which safeguards applications without requiring retrofitting. We report extensive experimental results that show that our approach performs remarkably well in practice.
►bibtex
PDF DOI: 10.1145/1698750.1698754
@article{Bisht:tissec10,
abstract = {SQL injection attacks are one of the top-most threats for applications written for the Web. These attacks are launched through specially crafted user inputs, on Web applications that use low-level string operations to construct SQL queries. In this work, we exhibit a novel and powerful scheme for automatically transforming Web applications to render them safe against all SQL injection attacks. A characteristic diagnostic feature of SQL injection attacks is that they change the intended structure of queries issued. Our technique for detecting SQL injection is to dynamically mine the programmer-intended query structure on any input, and detect attacks by comparing it against the structure of the actual query issued. We propose a simple and novel mechanism, called Candid, for mining programmer intended queries by dynamically evaluating runs over benign candidate inputs. This mechanism is theoretically well founded and is based on inferring intended queries by considering the symbolic query computed on a program run. Our approach has been implemented in a tool called Candid that retrofits Web applications written in Java to defend them against SQL injection attacks. We have also implemented Candid by modifying a Java Virtual Machine, which safeguards applications without requiring retrofitting. We report extensive experimental results that show that our approach performs remarkably well in practice.},
author = {Prithvi Bisht and Parthasarathy Madhusudan and V. N. Venkatakrishnan},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:23:23 -0600},
doi = {10.1145/1698750.1698754},
journal = {{ACM} Transactions on Information Systems Security},
keywords = {SQL injection; program analysis; program transformation; code retrofitting; symbolic evaluation; runtime monitoring; virtual machine},
month = {Feb},
number = 2,
pages = {14:1--14:39},
series = {{TISSEC}'10},
title = {{CANDID:} Dynamic candidate evaluations for automatic prevention of {SQL} injection attacks},
url = {https://doi.org/10.1145/1698750.1698754},
volume = 13,
year = 2010,
bdsk-url-1 = {https://dblp.org/rec/journals/tissec/BishtMV10},
bdsk-url-2 = {https://doi.org/10.1145/1698750.1698754},
}
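CANDID's core check, comparing the structure of the query issued on the real input against the query computed on a benign candidate of the same length, can be caricatured in a few lines (a hypothetical sketch only; the real tool parses full SQL and retrofits Java applications):

```python
import re

# Much-simplified sketch of CANDID's candidate-evaluation idea: build
# the query twice -- once from the actual input, once from a benign
# candidate of equal length -- and compare coarse query structures.

def build_query(name):
    # The vulnerable low-level string construction from the abstract.
    return "SELECT * FROM users WHERE name = '" + name + "'"

def structure(sql):
    # Collapse string literals to one token kind; keep everything else.
    tokens = re.findall(r"'[^']*'|\S+", sql)
    return ["STR" if t.startswith("'") else t.upper() for t in tokens]

def is_attack(user_input):
    candidate = "a" * len(user_input)   # benign candidate input
    return structure(build_query(user_input)) != structure(build_query(candidate))

print(is_attack("alice"))          # False: same query structure
print(is_attack("x' OR '1'='1"))   # True: injection changes the structure
```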
|
|
[28]
|
Automatically Preparing Safe SQL Queries
(Prithvi Bisht, A. Prasad Sistla, V. N. Venkatakrishnan)
Financial Cryptography and Data Security, 14th International Conference, FC 2010, Tenerife, Canary Islands, Spain, January 25-28, 2010, Revised Selected Papers (Lecture Notes in Computer Science, FC'10), 6052, pp. 272–288. 19 papers accepted out of 130, 14.6%.
Abstract
We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.
►bibtex
PDF DOI: 10.1007/978-3-642-14577-3_21
@inproceedings{Bisht:FC10,
abstract = {We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.},
address = {Tenerife, Spain},
annote = {19 papers accepted out of 130, 14.6%},
author = {Prithvi Bisht and A. Prasad Sistla and V. N. Venkatakrishnan},
booktitle = {Financial Cryptography and Data Security, 14th International Conference, {FC} 2010, Tenerife, Canary Islands, Spain, January 25-28, 2010, Revised Selected Papers},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 20:48:24 -0600},
doi = {10.1007/978-3-642-14577-3_21},
keywords = {SQL injection; program transformation; program analysis; code retrofitting; symbolic evaluation},
month = {Jan},
pages = {272--288},
publisher = {Springer},
series = {Lecture Notes in Computer Science, FC'10},
title = {Automatically Preparing Safe {SQL} Queries},
url = {https://doi.org/10.1007/978-3-642-14577-3_21},
volume = 6052,
year = 2010,
bdsk-url-1 = {https://dblp.org/rec/conf/fc/BishtSV10},
bdsk-url-2 = {https://doi.org/10.1007/978-3-642-14577-3_21},
}
|
|
[27]
|
Blueprint: Robust Prevention of Cross-site Scripting Attacks for Existing Browsers
(Mike Ter Louw, V. N. Venkatakrishnan)
30th IEEE Symposium on Security and Privacy (SP 2009), 17-20 May 2009, Oakland, California, USA (IEEESP'09), pp. 331–346. 26 out of 254 papers, 10.2%. 2009 AT&T Best Applied Security Research paper at CSAW09!
Abstract
As social networking sites proliferate across the World Wide Web, complex user-created HTML content is rapidly becoming the norm rather than the exception. User-created web content is a notorious vector for cross-site scripting (XSS) attacks that target websites and confidential user data. In this threat climate, mechanisms that render web applications immune to XSS attacks have been of recent research interest. A challenge for these security mechanisms is enabling web applications to accept complex HTML input from users, while disallowing malicious script content. This challenge is made difficult by anomalous web browser behaviors, which are often used as vectors for successful XSS attacks. Motivated by this problem, we present a new XSS defense strategy designed to be effective in widely deployed existing web browsers, despite anomalous browser behavior. Our approach seeks to minimize trust placed on browsers for interpreting untrusted content. We implemented this approach in a tool called BLUEPRINT that was integrated with several popular web applications. We evaluated BLUEPRINT against a barrage of stress tests that demonstrate strong resistance to attacks, excellent compatibility with web browsers and reasonable performance overheads.
►bibtex
PDF DOI: 10.1109/SP.2009.33
@inproceedings{Louw:ieeesp09,
abstract = {As social networking sites proliferate across the World Wide Web, complex user-created HTML content is rapidly becoming the norm rather than the exception. User-created web content is a notorious vector for cross-site scripting (XSS) attacks that target websites and confidential user data. In this threat climate, mechanisms that render web applications immune to XSS attacks have been of recent research interest. A challenge for these security mechanisms is enabling web applications to accept complex HTML input from users, while disallowing malicious script content. This challenge is made difficult by anomalous web browser behaviors, which are often used as vectors for successful XSS attacks. Motivated by this problem, we present a new XSS defense strategy designed to be effective in widely deployed existing web browsers, despite anomalous browser behavior. Our approach seeks to minimize trust placed on browsers for interpreting untrusted content. We implemented this approach in a tool called BLUEPRINT that was integrated with several popular web applications. We evaluated BLUEPRINT against a barrage of stress tests that demonstrate strong resistance to attacks, excellent compatibility with web browsers and reasonable performance overheads.},
address = {Oakland, CA, USA},
annote = {26 out of 254 papers, 10.2%},
author = {Mike Ter Louw and V. N. Venkatakrishnan},
booktitle = {30th {IEEE} Symposium on Security and Privacy {(SP} 2009), 17-20 May 2009, Oakland, California, {USA}},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 22:22:43 -0600},
doi = {10.1109/SP.2009.33},
keywords = {cross-site scripting; browser security; web security},
month = {May},
note = {2009 AT\&T Best Applied Security Research paper at CSAW09!},
pages = {331--346},
publisher = {{IEEE} Computer Society},
series = {{IEEESP}'09},
title = {Blueprint: Robust Prevention of Cross-site Scripting Attacks for Existing Browsers},
url = {https://doi.org/10.1109/SP.2009.33},
year = 2009,
bdsk-url-1 = {https://dblp.org/rec/conf/sp/LouwV09},
bdsk-url-2 = {https://doi.org/10.1109/SP.2009.33},
}
|
|
[26]
|
Alcatraz: An Isolated Environment for Experimenting with Untrusted Software
(Zhenkai Liang, Weiqing Sun, V. N. Venkatakrishnan, R. Sekar)
ACM Transactions on Information Systems Security (TISSEC'09), 12, pp. 14:1–14:37
Abstract
In this article, we present an approach for realizing a safe execution environment (SEE) that enables users to ``try out'' new software (or configuration changes to existing software) without the fear of damaging the system in any manner. A key property of our SEE is that it faithfully reproduces the behavior of applications, as if they were running natively on the underlying (host) operating system. This is accomplished via one-way isolation: processes running within the SEE are given read-access to the environment provided by the host OS, but their write operations are prevented from escaping outside the SEE. As a result, SEE processes cannot impact the behavior of host OS processes, or the integrity of data on the host OS. SEEs support a wide range of tasks, including: study of malicious code, controlled execution of untrusted software, experimentation with software configuration changes, testing of software patches, and so on. It provides a convenient way for users to inspect system changes made within the SEE. If these changes are not accepted, they can be rolled back at the click of a button. Otherwise, the changes can be committed so as to become visible outside the SEE. We provide consistency criteria that ensure semantic consistency of the committed results. We develop two different implementation approaches, one in user-land and the other in the OS kernel, for realizing a safe-execution environment. Our implementation results show that most software, including fairly complex server and client applications, can run successfully within our SEEs. It introduces low performance overheads, typically below 10 percent.
►bibtex
PDF DOI: 10.1145/1455526.1455527
@article{Liang:tissec09,
abstract = {In this article, we present an approach for realizing a safe execution environment (SEE) that enables users to ``try out'' new software (or configuration changes to existing software) without the fear of damaging the system in any manner. A key property of our SEE is that it faithfully reproduces the behavior of applications, as if they were running natively on the underlying (host) operating system. This is accomplished via one-way isolation: processes running within the SEE are given read-access to the environment provided by the host OS, but their write operations are prevented from escaping outside the SEE. As a result, SEE processes cannot impact the behavior of host OS processes, or the integrity of data on the host OS. SEEs support a wide range of tasks, including: study of malicious code, controlled execution of untrusted software, experimentation with software configuration changes, testing of software patches, and so on. It provides a convenient way for users to inspect system changes made within the SEE. If these changes are not accepted, they can be rolled back at the click of a button. Otherwise, the changes can be committed so as to become visible outside the SEE. We provide consistency criteria that ensure semantic consistency of the committed results. We develop two different implementation approaches, one in user-land and the other in the OS kernel, for realizing a safe-execution environment. Our implementation results show that most software, including fairly complex server and client applications, can run successfully within our SEEs. It introduces low performance overheads, typically below 10 percent.},
author = {Zhenkai Liang and Weiqing Sun and V. N. Venkatakrishnan and R. Sekar},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:23:38 -0600},
doi = {10.1145/1455526.1455527},
journal = {{ACM} Transactions on Information Systems Security},
keywords = {runtime monitoring; sandboxing; isolated execution; software installation},
month = {Jan},
number = 3,
pages = {14:1--14:37},
series = {{TISSEC}'09},
title = {Alcatraz: An Isolated Environment for Experimenting with Untrusted Software},
url = {https://doi.org/10.1145/1455526.1455527},
volume = 12,
year = 2009,
bdsk-url-1 = {https://dblp.org/rec/journals/tissec/LiangSVS09},
bdsk-url-2 = {https://doi.org/10.1145/1455526.1455527},
}
|
|
[25]
|
Preventing Information Leaks through Shadow Executions
(Roberto Capizzi, Antonio Longo, V. N. Venkatakrishnan, A. Prasad Sistla)
Twenty-Fourth Annual Computer Security Applications Conference, ACSAC 2008, Anaheim, California, USA, 8-12 December 2008 (ACSAC'08), pp. 322–331. 42 out of 185 submissions accepted, 22.7%.
Abstract
A concern about personal information confidentiality typically arises when any desktop application communicates to the external network, for example, to its producer's server for obtaining software version updates. We address this confidentiality concern of end users by an approach called shadow execution. A key property of shadow execution is that it allows applications to successfully communicate over the network while disallowing any information leaks. We describe the design and implementation of this approach for Windows applications. Experiments with our prototype implementation indicate that shadow execution allows applications to execute without inhibiting any behaviors, has acceptable performance overheads while preventing any information leaks.
►bibtex
PDF DOI: 10.1109/ACSAC.2008.50
@inproceedings{Capizzi:ACSAC08,
abstract = {A concern about personal information confidentiality typically arises when any desktop application communicates to the external network, for example, to its producer's server for obtaining software version updates. We address this confidentiality concern of end users by an approach called shadow execution. A key property of shadow execution is that it allows applications to successfully communicate over the network while disallowing any information leaks. We describe the design and implementation of this approach for Windows applications. Experiments with our prototype implementation indicate that shadow execution allows applications to execute without inhibiting any behaviors, has acceptable performance overheads while preventing any information leaks.},
annote = {42 out of 185 submissions accepted, 22.7%},
author = {Roberto Capizzi and Antonio Longo and V. N. Venkatakrishnan and A. Prasad Sistla},
booktitle = {Twenty-Fourth Annual Computer Security Applications Conference, {ACSAC} 2008, Anaheim, California, USA, 8-12 December 2008},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-14 17:34:54 -0600},
doi = {10.1109/ACSAC.2008.50},
keywords = {information flow; non-interference; shadow executions; privacy},
month = {Dec},
pages = {322--331},
publisher = {{IEEE} Computer Society},
series = {ACSAC'08},
title = {Preventing Information Leaks through Shadow Executions},
url = {https://doi.org/10.1109/ACSAC.2008.50},
year = 2008,
bdsk-url-1 = {https://dblp.org/rec/conf/acsac/CapizziLVS08},
bdsk-url-2 = {https://doi.org/10.1109/ACSAC.2008.50},
}
|
|
[24]
|
Enhancing web browser security against malware extensions
(Mike Ter Louw, Jin Soon Lim, V. N. Venkatakrishnan)
J. Comput. Virol. (JCV'08), 4, pp. 179–195
Abstract
In this paper we examine security issues of functionality extension mechanisms supported by web browsers. Extensions (or ``plug-ins'') in modern web browsers enjoy unrestrained access at all times and thus are attractive vectors for malware. To solidify the claim, we take on the role of malware writers looking to assume control of a user's browser space. We have taken advantage of the lack of security mechanisms for browser extensions and implemented a malware application for the popular Firefox web browser, which we call browserSpy, that requires no special privileges to be installed. browserSpy takes complete control of the user's browser space, can observe all activity performed through the browser and is undetectable. We then adopt the role of defenders to discuss defense strategies against such malware. Our primary contribution is a mechanism that uses code integrity checking techniques to control the extension installation and loading process. We describe two implementations of this mechanism: a drop-in solution that employs JavaScript and a faster, in-browser solution that makes use of the browser's native cryptography implementation. We also discuss techniques for runtime monitoring of extension behavior to provide a foundation for defending threats posed by installed extensions.
►bibtex
PDF DOI: 10.1007/S11416-007-0078-5
@article{Louw:JCV08,
abstract = {In this paper we examine security issues of functionality extension mechanisms supported by web browsers. Extensions (or ``plug-ins'') in modern web browsers enjoy unrestrained access at all times and thus are attractive vectors for malware. To solidify the claim, we take on the role of malware writers looking to assume control of a user's browser space. We have taken advantage of the lack of security mechanisms for browser extensions and implemented a malware application for the popular Firefox web browser, which we call browserSpy, that requires no special privileges to be installed. browserSpy takes complete control of the user's browser space, can observe all activity performed through the browser and is undetectable. We then adopt the role of defenders to discuss defense strategies against such malware. Our primary contribution is a mechanism that uses code integrity checking techniques to control the extension installation and loading process. We describe two implementations of this mechanism: a drop-in solution that employs JavaScript and a faster, in-browser solution that makes use of the browser's native cryptography implementation. We also discuss techniques for runtime monitoring of extension behavior to provide a foundation for defending threats posed by installed extensions.},
author = {Mike Ter Louw and Jin Soon Lim and V. N. Venkatakrishnan},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:23:58 -0600},
doi = {10.1007/S11416-007-0078-5},
journal = {J. Comput. Virol.},
keywords = {browser security; web security; attacks; browser extension; code integrity},
month = {Aug},
number = 3,
pages = {179--195},
series = {{JCV}'08},
title = {Enhancing web browser security against malware extensions},
url = {https://doi.org/10.1007/s11416-007-0078-5},
volume = 4,
year = 2008,
bdsk-url-1 = {https://dblp.org/rec/journals/virology/LouwLV08},
bdsk-url-2 = {https://doi.org/10.1007/s11416-007-0078-5},
}
|
|
[23]
|
XSS-GUARD: Precise Dynamic Prevention of Cross-Site Scripting Attacks
(Prithvi Bisht, V. N. Venkatakrishnan)
Detection of Intrusions and Malware, and Vulnerability Assessment, 5th International Conference, DIMVA 2008, Paris, France, July 10-11, 2008. Proceedings (Lecture Notes in Computer Science, DIMVA'08), 5137, pp. 23–43. Acceptance: 13 out of 42 papers, 31%.
Abstract
►bibtex
PDF DOI: 10.1007/978-3-540-70542-0_2
@inproceedings{Bisht:dimva08,
annote = {Acceptance 13 out of 42 papers, 31%},
author = {Prithvi Bisht and V. N. Venkatakrishnan},
booktitle = {Detection of Intrusions and Malware, and Vulnerability Assessment, 5th International Conference, {DIMVA} 2008, Paris, France, July 10-11, 2008. Proceedings},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 20:48:52 -0600},
doi = {10.1007/978-3-540-70542-0_2},
keywords = {cross-site scripting; content security; program transformation; program analysis; code retrofitting},
month = {Jul},
pages = {23--43},
publisher = {Springer},
series = {Lecture Notes in Computer Science, DIMVA'08},
title = {{XSS-GUARD:} Precise Dynamic Prevention of Cross-Site Scripting Attacks},
url = {https://doi.org/10.1007/978-3-540-70542-0_2},
volume = 5137,
year = 2008,
bdsk-url-1 = {https://dblp.org/rec/conf/dimva/BishtV08},
bdsk-url-2 = {https://doi.org/10.1007/978-3-540-70542-0_2},
}
|
|
[22]
|
Expanding Malware Defense by Securing Software Installations
(Weiqing Sun, R. Sekar, Zhenkai Liang, V. N. Venkatakrishnan)
Detection of Intrusions and Malware, and Vulnerability Assessment, 5th International Conference, DIMVA 2008, Paris, France, July 10-11, 2008. Proceedings (Lecture Notes in Computer Science, DIMVA'08), 5137, pp. 164–185. Acceptance: 13 out of 42 papers, 31%.
Abstract
Software installation provides an attractive entry vector for malware: since installations are performed with administrator privileges, malware can easily get the enhanced level of access needed to install backdoors, spyware, rootkits, or ``bot'' software, and to hide these installations from users. Previous research has been focused mainly on securing the execution phase of untrusted software, while largely ignoring the safety of installations. Even security-enhanced operating systems such as SELinux and Vista don't usually impose restrictions during software installs, expecting the system administrator to ``know what she is doing.'' This paper addresses this ``gap in armor'' by securing software installations. Our technique can support a diversity of package managers and software installers. It is based on a framework that simplifies the development and enforcement of policies that govern safety of installations. We present a simple policy that can be used to prevent untrusted software from modifying any of the files used by benign software packages, thus blocking the most common mechanism used by malware to ensure that it is run automatically after each system reboot. While the scope of our technique is limited to the installation phase, it can be easily combined with approaches for secure execution, e.g., by ensuring that all future runs of an untrusted package will take place within an administrator-specified sandbox. Our experimental evaluation has considered over one hundred benign and untrusted software packages. Our technique was able to block malicious packages among these without breaking non-malicious ones.
►bibtex
PDF DOI: 10.1007/978-3-540-70542-0_9
@inproceedings{Sun:dimva08,
abstract = {Software installation provides an attractive entry vector for malware: since installations are performed with administrator privileges, malware can easily get the enhanced level of access needed to install backdoors, spyware, rootkits, or ``bot'' software, and to hide these installations from users. Previous research has been focused mainly on securing the execution phase of untrusted software, while largely ignoring the safety of installations. Even security-enhanced operating systems such as SELinux and Vista don't usually impose restrictions during software installs, expecting the system administrator to ``know what she is doing.'' This paper addresses this ``gap in armor'' by securing software installations. Our technique can support a diversity of package managers and software installers. It is based on a framework that simplifies the development and enforcement of policies that govern safety of installations. We present a simple policy that can be used to prevent untrusted software from modifying any of the files used by benign software packages, thus blocking the most common mechanism used by malware to ensure that it is run automatically after each system reboot. While the scope of our technique is limited to the installation phase, it can be easily combined with approaches for secure execution, e.g., by ensuring that all future runs of an untrusted package will take place within an administrator-specified sandbox. Our experimental evaluation has considered over one hundred benign and untrusted software packages. Our technique was able to block malicious packages among these without breaking non-malicious ones.},
annote = {Acceptance 13 out of 42 papers, 31%},
author = {Weiqing Sun and R. Sekar and Zhenkai Liang and V. N. Venkatakrishnan},
booktitle = {Detection of Intrusions and Malware, and Vulnerability Assessment, 5th International Conference, {DIMVA} 2008, Paris, France, July 10-11, 2008. Proceedings},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 20:49:13 -0600},
doi = {10.1007/978-3-540-70542-0_9},
keywords = {Software installation;isolated execution;sandboxing;runtime monitoring},
month = {July},
pages = {164--185},
publisher = {Springer},
series = {Lecture Notes in Computer Science, DIMVA'08},
title = {Expanding Malware Defense by Securing Software Installations},
url = {https://doi.org/10.1007/978-3-540-70542-0_9},
volume = 5137,
year = 2008,
bdsk-url-1 = {https://dblp.org/rec/conf/dimva/SunSLV08},
bdsk-url-2 = {https://doi.org/10.1007/978-3-540-70542-0_9},
}
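The simple policy the abstract describes, blocking an untrusted package from modifying any file used by benign packages, amounts to a set-intersection check over install manifests. The sketch below is only an illustration of that policy idea with hypothetical file paths; the paper's actual enforcement operates at the system level, inside package managers and installers.

```python
def check_install(untrusted_manifest, benign_owned):
    """Illustrative policy check: an untrusted package's installation is
    blocked if it would write to any file already used by a benign package
    (the common persistence trick the paper's policy is designed to stop)."""
    conflicts = sorted(set(untrusted_manifest) & set(benign_owned))
    return (len(conflicts) == 0, conflicts)

# Hypothetical manifests: a package that drops itself into a benign-owned
# startup file is blocked; a self-contained package is allowed.
benign = {"/etc/rc.local", "/usr/bin/ls"}
ok, why = check_install(["/opt/foo/foo.bin", "/etc/rc.local"], benign)
assert not ok and why == ["/etc/rc.local"]
ok, why = check_install(["/opt/foo/foo.bin"], benign)
assert ok and why == []
```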
|
|
[21]
|
Analysis of Hypertext Isolation Techniques for XSS Prevention
(Mike Ter Louw, Prithvi Bisht, V. N. Venkatakrishnan)
Workshop on Web 2.0 Security and Privacy (W2SP'08) Acceptance rate: 14 out of 45 submissions, 31%
Abstract
Modern websites and web applications commonly integrate third-party and user-generated content to enrich the user's experience. Developers of these applications are in need of a simple way to limit the capabilities of this less trusted, outsourced web content and thereby protect their users from cross-site scripting attacks. We summarize several recent proposals that enable developers to isolate untrusted hypertext, and could be used to define robust constraint environments that are enforceable by web browsers. A comparative analysis of these proposals is presented highlighting security, legacy browser compatibility and several other important qualities.
►bibtex
@inproceedings{Louw:w2sp08,
abstract = {Modern websites and web applications commonly integrate third-party and user-generated content to enrich the user's experience. Developers of these applications are in need of a simple way to limit the capabilities of this less trusted, outsourced web content and thereby protect their users from cross-site scripting attacks. We summarize several recent proposals that enable developers to isolate untrusted hypertext, and could be used to define robust constraint environments that are enforceable by web browsers. A comparative analysis of these proposals is presented highlighting security, legacy browser compatibility and several other important qualities.},
address = {Oakland, CA, USA},
annote = {Acceptance rate: 14 out of 45 submissions, 31%},
author = {Mike Ter Louw and Prithvi Bisht and V. N. Venkatakrishnan},
booktitle = {Workshop on Web 2.0 Security and Privacy},
date-added = {2026-02-14 15:20:27 -0600},
date-modified = {2026-02-14 17:26:48 -0600},
keywords = {browser security; web security; cross-site scripting; content security},
month = {May},
series = {W2SP'08},
title = {Analysis of Hypertext Isolation Techniques for XSS Prevention},
year = 2008,
}
|
|
[20]
|
CMV: automatic verification of complete mediation for Java virtual machines
(A. Prasad Sistla, V. N. Venkatakrishnan, Michelle Zhou, Hilary Branske)
Proceedings of the 2008 ACM Symposium on Information, Computer and Communications Security, ASIACCS 2008, Tokyo, Japan, March 18-20, 2008 (ASIACCS'08), pp. 100–111 Acceptance rate: 32 out of 181 regular submissions, 18%
Abstract
Runtime monitoring systems play an important role in system security, and verification efforts that ensure that these systems satisfy certain desirable security properties are growing in importance. One such security property is complete mediation, which requires that sensitive operations are performed by a piece of code only after the monitoring system authorizes these actions. In this paper, we describe a verification technique that is designed to check for the satisfaction of this property directly on code from Java standard libraries. We describe a tool CMV that implements this technique and automatically checks shrink-wrapped Java bytecode for the complete mediation property. Experimental results on running our tool over several thousands of lines of bytecode from the Java libraries suggest that our approach is scalable, and leads to a very significant reduction in human efforts required for system verification.
►bibtex
PDF DOI: 10.1145/1368310.1368327
@inproceedings{Sistla:asiaccs08,
abstract = {Runtime monitoring systems play an important role in system security, and verification efforts that ensure that these systems satisfy certain desirable security properties are growing in importance. One such security property is complete mediation, which requires that sensitive operations are performed by a piece of code only after the monitoring system authorizes these actions. In this paper, we describe a verification technique that is designed to check for the satisfaction of this property directly on code from Java standard libraries. We describe a tool CMV that implements this technique and automatically checks shrink-wrapped Java bytecode for the complete mediation property. Experimental results on running our tool over several thousands of lines of bytecode from the Java libraries suggest that our approach is scalable, and leads to a very significant reduction in human efforts required for system verification.},
annote = {Acceptance rate: 32 out of 181 regular submissions, 18%},
author = {A. Prasad Sistla and V. N. Venkatakrishnan and Michelle Zhou and Hilary Branske},
booktitle = {Proceedings of the 2008 {ACM} Symposium on Information, Computer and Communications Security, {ASIACCS} 2008, Tokyo, Japan, March 18-20, 2008},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-15 09:59:16 -0600},
doi = {10.1145/1368310.1368327},
keywords = {verification;model checking;runtime monitoring;stack inspection;Program analysis; formal methods},
month = {Mar},
pages = {100--111},
publisher = {{ACM}},
series = {ASIACCS'08},
title = {{CMV:} automatic verification of complete mediation for {Java} virtual machines},
url = {https://doi.org/10.1145/1368310.1368327},
year = 2008,
bdsk-url-1 = {https://dblp.org/rec/conf/ccs/SistlaVZB08},
bdsk-url-2 = {https://doi.org/10.1145/1368310.1368327},
}
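Complete mediation, as the abstract defines it, can be phrased as a reachability property: no path from program entry may reach a sensitive operation without first passing an authorization check. The sketch below shows that property on a toy control-flow graph; it is not CMV's bytecode-level analysis, and all node names are made up.

```python
def unmediated_ops(cfg, entry, checks, sensitive):
    """Return sensitive nodes reachable from entry along some path
    that never passes through an authorization-check node."""
    seen, stack = set(), [entry]
    while stack:
        node = stack.pop()
        if node in seen or node in checks:
            continue  # traversal stops at mediation points
        seen.add(node)
        stack.extend(cfg.get(node, []))
    return sorted(n for n in sensitive if n in seen)

# Toy CFG: one path to 'writeFile' is mediated, another bypasses the check.
cfg = {"entry": ["authorize", "helper"],
       "authorize": ["writeFile"],
       "helper": ["writeFile"]}
assert unmediated_ops(cfg, "entry", {"authorize"}, {"writeFile"}) == ["writeFile"]

# Removing the bypass edge makes every path mediated.
cfg["helper"] = []
assert unmediated_ops(cfg, "entry", {"authorize"}, {"writeFile"}) == []
```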
|
|
[19]
|
CANDID: preventing SQL injection attacks using dynamic candidate evaluations
(Sruthi Bandhakavi, Prithvi Bisht, P. Madhusudan, V. N. Venkatakrishnan)
Proceedings of the 2007 ACM Conference on Computer and Communications Security, CCS 2007, Alexandria, Virginia, USA, October 28-31, 2007 (CCS'07), pp. 12–24 Acceptance rate: 55 out of 303 submissions, 18%
Abstract
SQL injection attacks are one of the topmost threats for applications written for the Web. These attacks are launched through specially crafted user input on web applications that use low level string operations to construct SQL queries. In this work, we exhibit a novel and powerful scheme for automatically transforming web applications to render them safe against all SQL injection attacks. A characteristic diagnostic feature of SQL injection attacks is that they change the intended structure of queries issued. Our technique for detecting SQL injection is to dynamically mine the programmer-intended query structure on any input, and to detect attacks by comparing them against the intended query structure. We propose a simple and novel mechanism, called Candid, for mining programmer intended queries by dynamically evaluating runs over benign candidate inputs. This mechanism is theoretically well founded and is based on inferring intended queries by considering the symbolic query computed on a program run. Our approach has been implemented in a tool called Candid that retrofits Web applications written in Java to defend them against SQL injection attacks. We report extensive experimental results that show that our approach performs remarkably well in practice.
►bibtex
PDF DOI: 10.1145/1315245.1315249
@inproceedings{Bandhakavti:ccs07,
abstract = {SQL injection attacks are one of the topmost threats for applications written for the Web. These attacks are launched through specially crafted user input on web applications that use low level string operations to construct SQL queries. In this work, we exhibit a novel and powerful scheme for automatically transforming web applications to render them safe against all SQL injection attacks. A characteristic diagnostic feature of SQL injection attacks is that they change the intended structure of queries issued. Our technique for detecting SQL injection is to dynamically mine the programmer-intended query structure on any input, and to detect attacks by comparing them against the intended query structure. We propose a simple and novel mechanism, called Candid, for mining programmer intended queries by dynamically evaluating runs over benign candidate inputs. This mechanism is theoretically well founded and is based on inferring intended queries by considering the symbolic query computed on a program run. Our approach has been implemented in a tool called Candid that retrofits Web applications written in Java to defend them against SQL injection attacks. We report extensive experimental results that show that our approach performs remarkably well in practice.},
annote = {Acceptance rate: 55 out of 303 submissions, 18%},
author = {Sruthi Bandhakavi and Prithvi Bisht and P. Madhusudan and V. N. Venkatakrishnan},
booktitle = {Proceedings of the 2007 {ACM} Conference on Computer and Communications Security, {CCS} 2007, Alexandria, Virginia, USA, October 28-31, 2007},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-14 17:26:22 -0600},
doi = {10.1145/1315245.1315249},
keywords = {SQL injection;Program analysis;program transformation;Code retrofitting;symbolic evaluation;runtime monitoring},
month = {Oct},
pages = {12--24},
publisher = {{ACM}},
series = {CCS'07},
title = {{CANDID:} preventing {SQL} injection attacks using dynamic candidate evaluations},
url = {https://doi.org/10.1145/1315245.1315249},
year = 2007,
bdsk-url-1 = {https://dblp.org/rec/conf/ccs/BandhakaviBMV07},
bdsk-url-2 = {https://doi.org/10.1145/1315245.1315249},
}
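The detection idea, comparing the structure of the query built from the real input against the structure built from a benign candidate input of the same length, can be sketched in a few lines. This toy Python version with a naive tokenizer stands in for the paper's Java implementation and real SQL parsing; the query template and inputs are invented for illustration.

```python
import re

def tokenize(sql):
    # Coarse lexer: quoted strings, words, single-character operators.
    return re.findall(r"'[^']*'|\w+|[^\w\s]", sql)

def structure(sql):
    # Abstract away literal values so only the query's shape remains.
    return ["STR" if t.startswith("'") else t.upper() for t in tokenize(sql)]

def build_query(user):
    # Deliberately vulnerable string concatenation, as in the attack model.
    return "SELECT * FROM users WHERE name = '" + user + "'"

def is_attack(user):
    # Candidate input: a benign string of the same length as the real input.
    candidate = "a" * len(user)
    return structure(build_query(user)) != structure(build_query(candidate))

assert not is_attack("alice")          # benign input: structures agree
assert is_attack("x' OR '1'='1")       # injection changes the query's shape
```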
|
|
[18]
|
Extensible Web Browser Security
(Mike Ter Louw, Jin Soon Lim, V. N. Venkatakrishnan)
Detection of Intrusions and Malware, and Vulnerability Assessment, 4th International Conference, DIMVA 2007, Lucerne, Switzerland, July 12-13, 2007, Proceedings (Lecture Notes in Computer Science, DIMVA'07), 4579, pp. 1–19 Acceptance rate: 14 out of 57 submissions, 24.5%
Abstract
In this paper we examine the security issues in functionality extension mechanisms supported by web browsers. Extensions (or ``plug-ins'') in modern web browsers enjoy unlimited power without restraint and thus are attractive vectors for malware. To solidify the claim, we take on the role of malware writers looking to assume control of a user's browser space. We have taken advantage of the lack of security mechanisms for browser extensions and have implemented a piece of malware for the popular Firefox web browser, which we call browserSpy, that requires no special privileges to be installed. Once installed, browserSpy takes complete control of a user's browser space and can observe all the activity performed through the browser while being undetectable. We then adopt the role of defenders to discuss defense strategies against such malware. Our primary contribution is a mechanism that uses code integrity checking techniques to control the extension installation and loading process. We also discuss techniques for runtime monitoring of extension behavior that provide a foundation for defending threats due to installed extensions.
►bibtex
PDF DOI: 10.1007/978-3-540-73614-1_1
@inproceedings{Louw:dimva07,
abstract = {In this paper we examine the security issues in functionality extension mechanisms supported by web browsers. Extensions (or ``plug-ins'') in modern web browsers enjoy unlimited power without restraint and thus are attractive vectors for malware. To solidify the claim, we take on the role of malware writers looking to assume control of a user's browser space. We have taken advantage of the lack of security mechanisms for browser extensions and have implemented a piece of malware for the popular Firefox web browser, which we call browserSpy, that requires no special privileges to be installed. Once installed, browserSpy takes complete control of a user's browser space and can observe all the activity performed through the browser while being undetectable. We then adopt the role of defenders to discuss defense strategies against such malware. Our primary contribution is a mechanism that uses code integrity checking techniques to control the extension installation and loading process. We also discuss techniques for runtime monitoring of extension behavior that provide a foundation for defending threats due to installed extensions.},
annote = {Acceptance rate: 14 out of 57 submissions, 24.5%},
author = {Mike Ter Louw and Jin Soon Lim and V. N. Venkatakrishnan},
booktitle = {Detection of Intrusions and Malware, and Vulnerability Assessment, 4th International Conference, {DIMVA} 2007, Lucerne, Switzerland, July 12-13, 2007, Proceedings},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 20:49:30 -0600},
doi = {10.1007/978-3-540-73614-1_1},
keywords = {browser security; web security; attacks; browser extension; code integrity},
month = {July},
pages = {1--19},
publisher = {Springer},
series = {Lecture Notes in Computer Science, DIMVA'07},
title = {Extensible Web Browser Security},
url = {https://doi.org/10.1007/978-3-540-73614-1_1},
volume = 4579,
year = 2007,
bdsk-url-1 = {https://dblp.org/rec/conf/dimva/LouwLV07},
bdsk-url-2 = {https://doi.org/10.1007/978-3-540-73614-1_1},
}
|
|
[17]
|
A Comparative Study of Three Random Password Generators
(Michael Leonhard, V. N. Venkatakrishnan)
IEEE Conference on Information Technology (EIT'07), pp. 227–232
Abstract
This paper compares three random password generation schemes, describing and analyzing each. It also reports the results of a small study testing the quality of the passwords generated by the schemes. Qualities discussed include security, memorability, and user affinity. Improvements to the schemes and experiment are suggested.
►bibtex
@inproceedings{Leonhard:EIT07,
abstract = {This paper compares three random password generation schemes, describing and analyzing each. It also reports the results of a small study testing the quality of the passwords generated by the schemes. Qualities discussed include security, memorability, and user affinity. Improvements to the schemes and experiment are suggested.},
address = {Chicago, IL, USA},
author = {Michael Leonhard and V. N. Venkatakrishnan},
booktitle = {{IEEE} Conference on Information Technology {(EIT} 2007)},
date-added = {2026-02-14 14:39:35 -0600},
date-modified = {2026-02-14 17:36:46 -0600},
keywords = {Passwords; Authentication},
month = {May},
pages = {227--232},
series = {EIT'07},
title = {A Comparative Study of Three Random Password Generators},
year = 2007,
}
|
|
[16]
|
Data Sandboxing: A Technique for Enforcing Confidentiality Policies
(Tejas Khatiwala, Raj Swaminathan, V. N. Venkatakrishnan)
22nd Annual Computer Security Applications Conference (ACSAC 2006), 11-15 December 2006, Miami Beach, Florida, USA (ACSAC'06), pp. 223–234 Acceptance rate: 32 out of 135 submissions, 26.5%
Abstract
When an application reads private/sensitive information and subsequently communicates on an output channel such as a public file or a network connection, how can we ensure that the data written is free of private information? In this paper, we address this question in a practical setting through the use of a technique that we call ``data sandboxing''. Essentially, data sandboxing is implemented using the popular technique of system call interposition to mediate output channels used by a program. To distinguish between private and public data, the program is partitioned into two: one that contains all the instructions that handle sensitive data and the other containing the rest of the instructions. This partitioning is performed based on techniques from program slicing. When run together, these two programs collectively replace the original program. To address confidentiality, these programs are sandboxed with different system call interposition based policies. We discuss the design and implementation of a tool that enforces confidentiality policies on C programs using this technique. We also report our experiences in using our tool over several programs that handle confidential data.
►bibtex
PDF DOI: 10.1109/ACSAC.2006.22
@inproceedings{Khatiwala:ACSAC06,
abstract = {When an application reads private/sensitive information and subsequently communicates on an output channel such as a public file or a network connection, how can we ensure that the data written is free of private information? In this paper, we address this question in a practical setting through the use of a technique that we call ``data sandboxing''. Essentially, data sandboxing is implemented using the popular technique of system call interposition to mediate output channels used by a program. To distinguish between private and public data, the program is partitioned into two: one that contains all the instructions that handle sensitive data and the other containing the rest of the instructions. This partitioning is performed based on techniques from program slicing. When run together, these two programs collectively replace the original program. To address confidentiality, these programs are sandboxed with different system call interposition based policies. We discuss the design and implementation of a tool that enforces confidentiality policies on C programs using this technique. We also report our experiences in using our tool over several programs that handle confidential data.},
annote = {Acceptance rate: 32 out of 135 submissions, 26.5%},
author = {Tejas Khatiwala and Raj Swaminathan and V. N. Venkatakrishnan},
booktitle = {22nd Annual Computer Security Applications Conference {(ACSAC} 2006), 11-15 December 2006, Miami Beach, Florida, {USA}},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-14 17:25:49 -0600},
doi = {10.1109/ACSAC.2006.22},
keywords = {runtime monitoring;sandboxing;information flow;Security policies;system call monitoring;confidentiality},
month = {Dec},
pages = {223--234},
series = {ACSAC'06},
title = {Data Sandboxing: {A} Technique for Enforcing Confidentiality Policies},
url = {https://doi.org/10.1109/ACSAC.2006.22},
year = 2006,
bdsk-url-1 = {https://dblp.org/rec/conf/acsac/KhatiwalaSV06},
bdsk-url-2 = {https://doi.org/10.1109/ACSAC.2006.22},
}
|
|
[15]
|
Provably Correct Runtime Enforcement of Non-interference Properties
(V. N. Venkatakrishnan, Wei Xu, Daniel C. DuVarney, R. Sekar)
Information and Communications Security, 8th International Conference, ICICS 2006, Raleigh, NC, USA, December 4-7, 2006, Proceedings (Lecture Notes in Computer Science, ICICS'06), pp. 332–351 Acceptance rate: 40 out of 122 submissions, 32%
Abstract
Non-interference has become the standard criterion for ensuring confidentiality of sensitive data in the information flow literature. However, application of non-interference to software systems has been limited in practice. This is partly due to the imprecision that is inherent in static analyses that have formed the basis of previous non-interference based techniques. Runtime approaches can be significantly more accurate than static analysis, and have been more successful in practical systems that reason about information flow. However, these techniques only reason about explicit information flows that take place via assignments in a program. Implicit flows that take place without involving assignments, and can be inferred from the structure and/or semantics of the program, are missed by runtime techniques. This paper seeks to bridge the gap between the accuracy provided by runtime techniques and the completeness provided by static analysis techniques. In particular, we develop a hybrid technique that relies primarily on runtime information-flow tracking, but augments it with static analysis to reason about implicit flows that arise due to unexecuted paths in a program. We prove that the resulting technique preserves non-interference.
►bibtex
PDF DOI: 10.1007/11935308_24
@inproceedings{Venkatakrishnan:iciss06,
abstract = {Non-interference has become the standard criterion for ensuring confidentiality of sensitive data in the information flow literature. However, application of non-interference to software systems has been limited in practice. This is partly due to the imprecision that is inherent in static analyses that have formed the basis of previous non-interference based techniques. Runtime approaches can be significantly more accurate than static analysis, and have been more successful in practical systems that reason about information flow. However, these techniques only reason about explicit information flows that take place via assignments in a program. Implicit flows that take place without involving assignments, and can be inferred from the structure and/or semantics of the program, are missed by runtime techniques. This paper seeks to bridge the gap between the accuracy provided by runtime techniques and the completeness provided by static analysis techniques. In particular, we develop a hybrid technique that relies primarily on runtime information-flow tracking, but augments it with static analysis to reason about implicit flows that arise due to unexecuted paths in a program. We prove that the resulting technique preserves non-interference.},
address = {Raleigh, NC},
annote = {Acceptance rate: 40 out of 122 submissions, 32%},
author = {V. N. Venkatakrishnan and Wei Xu and Daniel C. DuVarney and R. Sekar},
booktitle = {Information and Communications Security, 8th International Conference, {ICICS} 2006, Raleigh, NC, USA, December 4-7, 2006, Proceedings},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 20:49:49 -0600},
doi = {10.1007/11935308_24},
keywords = {information flow;runtime monitoring; Program analysis;program transformation;Security policies;non-interference},
month = {Dec},
pages = {332--351},
publisher = {Springer},
series = {Lecture Notes in Computer Science, ICICS'06},
title = {Provably Correct Runtime Enforcement of Non-interference Properties},
url = {https://doi.org/10.1007/11935308_24},
year = 2006,
bdsk-url-1 = {https://dblp.org/rec/conf/icics/VenkatakrishnanXDS06},
bdsk-url-2 = {https://doi.org/10.1007/11935308_24},
}
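The hybrid the abstract describes, dynamic taint tracking augmented with a static pass over unexecuted branches to catch implicit flows, can be illustrated on the classic one-bit leak. The sketch below is a toy monitor, not the paper's formally verified transformation; the tiny statement representation and names are invented for illustration.

```python
class Val:
    """A value paired with a taint bit (True = derived from a secret)."""
    def __init__(self, v, tainted=False):
        self.v, self.tainted = v, tainted

def exec_if(env, cond, then_stmts, else_stmts):
    """Execute a conditional under taint tracking.
    Statements are (name, Val) assignment pairs."""
    taken = then_stmts if cond.v else else_stmts
    for name, val in taken:
        # Explicit flow, plus "pc taint" inherited from the condition.
        env[name] = Val(val.v, val.tainted or cond.tainted)
    if cond.tainted:
        # Implicit flow through the *unexecuted* branch: a static pass
        # (here, just listing assigned names) taints those variables too.
        not_taken = then_stmts if not cond.v else else_stmts
        for name, _ in not_taken:
            if name in env:
                env[name] = Val(env[name].v, True)

def copy_bit(bit):
    # Classic leak: x ends up equal to the secret bit without any
    # direct assignment from it.
    env = {"x": Val(0)}
    exec_if(env, Val(bit, tainted=True), [("x", Val(1))], [])
    return env["x"]

assert copy_bit(1).v == 1 and copy_bit(1).tainted
# A purely dynamic monitor would miss this run (nothing executes in the
# branch); the static augmentation is what catches it.
assert copy_bit(0).v == 0 and copy_bit(0).tainted
```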
|
|
[14]
|
SUEZ: A Distributed Safe Execution Environment for System Administration Trials
(Doo San Sim, V. N. Venkatakrishnan)
Proceedings of the 20th Conference on Systems Administration (LISA 2006), Washington, DC, USA, December 3-8, 2006 (LISA'06), pp. 161–173
Abstract
►bibtex
PDF
@inproceedings{Sim:lisa06,
address = {Washington D.C., USA},
author = {Doo San Sim and V. N. Venkatakrishnan},
booktitle = {Proceedings of the 20th Conference on Systems Administration {(LISA} 2006), Washington, DC, USA, December 3-8, 2006},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-14 10:15:08 -0600},
keywords = {Software installation;sandboxing;runtime monitoring;Security policies},
month = {December},
pages = {161--173},
publisher = {{USENIX}},
series = {LISA'06},
title = {{SUEZ:} {A} Distributed Safe Execution Environment for System Administration Trials},
url = {http://www.usenix.org/events/lisa06/tech/sim.html},
year = 2006,
bdsk-url-1 = {https://dblp.org/rec/conf/lisa/SimV06},
bdsk-url-2 = {http://www.usenix.org/events/lisa06/tech/sim.html},
}
|
|
[13]
|
A Framework for Building Privacy-Conscious Composite Web Services
(Wei Xu, V. N. Venkatakrishnan, R. Sekar, I. V. Ramakrishnan)
2006 IEEE International Conference on Web Services (ICWS 2006), 18-22 September 2006, Chicago, Illinois, USA (ICWS'06), pp. 655–662 Acceptance rate: 17%
Abstract
The rapid growth of web applications has prompted increasing interest in the area of composite web services that involve several service providers. The potential for such composite web services can be realized only if consumer privacy concerns are satisfactorily addressed. In this paper, we propose a framework that addresses consumer privacy concerns in the context of highly customizable composite web services. Our approach involves service producers exchanging their terms-of-use with consumers in the form of "models". Our framework provides automated techniques for checking these models at the consumer site for compliance of consumer privacy policies. In the event of a policy violation, our framework supports automatic generation of "obligations" that the consumer generates for the composite service. These obligations are automatically enforced through a dynamic program analysis approach on the web service composition code. We illustrate our approach with the implementation of two example services.
►bibtex
PDF DOI: 10.1109/ICWS.2006.4
@inproceedings{Xu:icws06,
abstract = {The rapid growth of web applications has prompted increasing interest in the area of composite web services that involve several service providers. The potential for such composite web services can be realized only if consumer privacy concerns are satisfactorily addressed. In this paper, we propose a framework that addresses consumer privacy concerns in the context of highly customizable composite web services. Our approach involves service producers exchanging their terms-of-use with consumers in the form of "models". Our framework provides automated techniques for checking these models at the consumer site for compliance of consumer privacy policies. In the event of a policy violation, our framework supports automatic generation of "obligations" that the consumer generates for the composite service. These obligations are automatically enforced through a dynamic program analysis approach on the web service composition code. We illustrate our approach with the implementation of two example services.},
address = {Chicago, IL, USA},
annote = {Acceptance rate: 17%},
author = {Wei Xu and V. N. Venkatakrishnan and R. Sekar and I. V. Ramakrishnan},
booktitle = {2006 {IEEE} International Conference on Web Services {(ICWS} 2006), 18-22 September 2006, Chicago, Illinois, {USA}},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-14 17:24:57 -0600},
doi = {10.1109/ICWS.2006.4},
keywords = {web service;Security policies;},
month = {September},
pages = {655--662},
publisher = {{IEEE} Computer Society},
series = {ICWS'06},
title = {A Framework for Building Privacy-Conscious Composite Web Services},
url = {https://doi.org/10.1109/ICWS.2006.4},
year = 2006,
bdsk-url-1 = {https://dblp.org/rec/conf/icws/XuVSR06},
bdsk-url-2 = {https://doi.org/10.1109/ICWS.2006.4},
}
|
|
[12]
|
Programming language based analysis for lifting to an operating system's access control model
(Jon Solworth, V. N. Venkatakrishnan)
ECOOP Workshop on Programming Languages and Operating Systems (ECOOP'05)
Abstract
Traditionally, operating systems have employed Discretionary Access Controls (DACs) as their authorization model. We have implemented a new authorization model called kernelsec, which provides a much richer set of controls than DACs and thus provides more refined protections. In this paper, we describe a programming language-based analysis to ``lift'' the existing application base from the Linux DAC to the kernelsec authorization model. The authorization model, which is part of the Operating System (OS) Kernel, controls a process's external interaction by allowing or denying operations requested by the process. In a traditional DAC authorization model, the user who created an object, such as a file, is the owner of that object and determines who can access the object. Unfortunately, DAC models are incapable of providing strong security properties, leading to a culture of ``blame the user'' when something goes wrong. An alternative is to provide Mandatory Access Controls (MACs) which can enforce organizational rules. We note that Role-Based Access Controls (RBAC) and SPBAC can provide either DAC or MAC. MACs can be integrated with DACs by allowing the user to decide permissions that are not mandated by the organization. This paper explores some of the issues in transitioning from a DAC-based to a MAC-based authorization model and the role compilers and programming languages can play.
►bibtex
@inproceedings{Solworth:ecoop05,
abstract = {Traditionally, operating systems have employed Discretionary Access Controls (DACs) as their authorization model. We have implemented a new authorization model called kernelsec, which provides a much richer set of controls than DACs and thus provides more refined protections. In this paper, we describe a programming language-based analysis to ``lift'' the existing application base from the Linux DAC to the kernelsec authorization model. The authorization model, which is part of the Operating System (OS) Kernel, controls a process's external interaction by allowing or denying operations requested by the process. In a traditional DAC authorization model, the user who created an object, such as a file, is the owner of that object and determines who can access the object. Unfortunately, DAC models are incapable of providing strong security properties, leading to a culture of ``blame the user'' when something goes wrong. An alternative is to provide Mandatory Access Controls (MACs) which can enforce organizational rules. We note that Role-Based Access Controls (RBAC) and SPBAC can provide either DAC or MAC. MACs can be integrated with DACs by allowing the user to decide permissions that are not mandated by the organization. This paper explores some of the issues in transitioning from a DAC-based to a MAC-based authorization model and the role compilers and programming languages can play.},
address = {Glasgow, UK},
author = {Jon Solworth and V. N. Venkatakrishnan},
booktitle = {ECOOP Workshop on Programming Languages and Operating Systems},
date-added = {2026-02-14 10:34:06 -0600},
date-modified = {2026-02-14 10:42:22 -0600},
keywords = {Program analysis; operating systems},
month = {July},
series = {ECOOP'05},
title = {Programming language based analysis for lifting to an operating system's access control model},
year = 2005,
}
|
|
[11]
|
An Approach for Realizing Privacy-Preserving Web-Based Services
(Wei Xu, R. Sekar, I. V. Ramakrishnan, V. N. Venkatakrishnan)
Special Interest Tracks and Posters of the 14th International Conference on World Wide Web (WWW '05), pp. 1014–1015
Abstract
We present a new approach where the consumers as well as providers can express their privacy concerns in a formal way. Specifically, consumers express their requirements in the form of policies, while providers specify their use of consumer data using models. Our approach automates compatibility checking between policies and models. If there is an incompatibility, the consumer is informed how she can refine her policies in order to use the service. If she does not want to change her policies in any way, the approach passes on additional privacy requirements to the provider. Service access can continue in case of incompatibilities only if the consumer relaxes her policies, or the provider honors additional consumer privacy requirements. The key idea behind our approach is a judicial combination of trust (on service providers to accurately specify use of consumer data) and verification (for compatibility resolution). This combination enables our approach to support privacy preservation without requiring access to proprietary code that implements the service.
►bibtex
PDF DOI: 10.1145/1062745.1062845
@inproceedings{Xu:WWW05,
abstract = {We present a new approach where the consumers as well as providers can express their privacy concerns in a formal way. Specifically, consumers express their requirements in the form of policies, while providers specify their use of consumer data using models. Our approach automates compatibility checking between policies and models. If there is an incompatibility, the consumer is informed how she can refine her policies in order to use the service. If she does not want to change her policies in any way, the approach passes on additional privacy requirements to the provider. Service access can continue in case of incompatibilities only if the consumer relaxes her policies, or the provider honors additional consumer privacy requirements. The key idea behind our approach is a judicial combination of trust (on service providers to accurately specify use of consumer data) and verification (for compatibility resolution). This combination enables our approach to support privacy preservation without requiring access to proprietary code that implements the service.},
address = {Chiba, Japan},
author = {Xu, Wei and Sekar, R. and Ramakrishnan, I. V. and Venkatakrishnan, V. N.},
booktitle = {Special Interest Tracks and Posters of the 14th International Conference on World Wide Web},
date-added = {2023-02-19 11:10:34 -0600},
date-modified = {2026-02-14 10:10:26 -0600},
doi = {10.1145/1062745.1062845},
keywords = {web service, privacy, information flow},
location = {Chiba, Japan},
month = {May},
pages = {1014--1015},
series = {WWW '05},
title = {An Approach for Realizing Privacy-Preserving Web-Based Services},
url = {https://doi.org/10.1145/1062745.1062845},
year = 2005,
bdsk-url-1 = {https://doi.org/10.1145/1062745.1062845},
}
|
|
[10]
|
A secure composition framework for trustworthy personal information assistants
(V. N. Venkatakrishnan, Wei Xu, I. V. Ramakrishnan, R. Sekar)
International Conference on Integration of Knowledge Intensive Multi-Agent Systems, 2005. (KIMAS'05), pp. 561-566
Abstract
In this paper, we provide a framework that supports composition of individual agents that enables users to accomplish complex tasks that would otherwise be laborious and difficult with mere use of traditional keyword based search engines. A key benefit of our approach is that in the framework the personal information handled by the agent system is guaranteed to be free from accidental leakage to Websites that are not trustworthy, thereby ensuring the privacy of end-user data. We describe our approach with a prototype example which suggests that such highly usable, trustworthy agent systems can be built and deployed quickly with modest implementation efforts.
►bibtex
DOI: 10.1109/KIMAS.2005.1427144
@inproceedings{Venkatakrishnan:KIMAS05,
abstract = {In this paper, we provide a framework that supports composition of individual agents that enables users to accomplish complex tasks that would otherwise be laborious and difficult with mere use of traditional keyword based search engines. A key benefit of our approach is that in the framework the personal information handled by the agent system is guaranteed to be free from accidental leakage to Websites that are not trustworthy, thereby ensuring the privacy of end-user data. We describe our approach with a prototype example which suggests that such highly usable, trustworthy agent systems can be built and deployed quickly with modest implementation efforts.},
address = {Waltham, MA, USA},
author = {V. N. Venkatakrishnan and Wei Xu and I. V. Ramakrishnan and R. Sekar},
booktitle = {International Conference on Integration of Knowledge Intensive Multi-Agent Systems, 2005.},
date-added = {2023-02-19 11:14:42 -0600},
date-modified = {2026-02-16 23:24:17 -0600},
doi = {10.1109/KIMAS.2005.1427144},
keywords = {web service, Privacy, information flow},
month = {Apr},
pages = {561-566},
series = {{KIMAS}'05},
title = {A secure composition framework for trustworthy personal information assistants},
year = 2005,
bdsk-url-1 = {https://doi.org/10.1109/KIMAS.2005.1427144},
}
|
|
[9]
|
One-Way Isolation: An Effective Approach for Realizing Safe Execution Environments
(Weiqing Sun, Zhenkai Liang, V. N. Venkatakrishnan, R. Sekar)
Proceedings of the Network and Distributed System Security Symposium, NDSS 2005, San Diego, California, USA (NDSS'05) Acceptance rate: 13%
Abstract
In this paper, we present an approach for realizing a safe execution environment (SEE) that enables users to ``try out'' new software (or configuration changes to existing software) without the fear of damaging the system in any manner. A key property of our SEE is that it faithfully reproduces the behavior of applications, as if they were running natively on the underlying host operating system. This is accomplished via one-way isolation: processes running within the SEE are given read-access to the environment provided by the host OS, but their write operations are prevented from escaping outside the SEE. As a result, SEE processes cannot impact the behavior of host OS processes, or the integrity of data on the host OS. Our SEE supports a wide range of tasks, including: study of malicious code, controlled execution of untrusted software, experimentation with software configuration changes, testing of software patches, and so on. It provides a convenient way for users to inspect system changes made within the SEE. If the user does not accept these changes, they can be rolled back at the click of a button. Otherwise, the changes can be ``committed'' so as to become visible outside the SEE. We provide consistency criteria that ensure semantic consistency of the committed results. We also develop an efficient technique for implementing the commit operation. Our implementation results show that most software, including fairly complex server and client applications, can run successfully within the SEE. The approach introduces low performance overheads, typically below 10%.
►bibtex
PDF
@inproceedings{Sun:ndss05,
abstract = {In this paper, we present an approach for realizing a safe execution environment (SEE) that enables users to ``try out'' new software (or configuration changes to existing software) without the fear of damaging the system in any manner. A key property of our SEE is that it faithfully reproduces the behavior of applications, as if they were running natively on the underlying host operating system. This is accomplished via one-way isolation: processes running within the SEE are given read-access to the environment provided by the host OS, but their write operations are prevented from escaping outside the SEE. As a result, SEE processes cannot impact the behavior of host OS processes, or the integrity of data on the host OS. Our SEE supports a wide range of tasks, including: study of malicious code, controlled execution of untrusted software, experimentation with software configuration changes, testing of software patches, and so on. It provides a convenient way for users to inspect system changes made within the SEE. If the user does not accept these changes, they can be rolled back at the click of a button. Otherwise, the changes can be ``committed'' so as to become visible outside the SEE. We provide consistency criteria that ensure semantic consistency of the committed results. We also develop an efficient technique for implementing the commit operation. Our implementation results show that most software, including fairly complex server and client applications, can run successfully within the SEE. The approach introduces low performance overheads, typically below 10%.},
annote = {Acceptance rate: 13%},
author = {Weiqing Sun and Zhenkai Liang and V. N. Venkatakrishnan and R. Sekar},
booktitle = {Proceedings of the Network and Distributed System Security Symposium, {NDSS} 2005, San Diego, California, {USA}},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-16 23:04:47 -0600},
keywords = {sandboxing;isolated execution;operating systems;Security policies},
month = {Feb},
publisher = {The Internet Society},
series = {{NDSS}'05},
title = {One-Way Isolation: An Effective Approach for Realizing Safe Execution Environments},
url = {https://www.ndss-symposium.org/ndss2005/one-way-isolation-effective-approach-realizing-safe-execution-environments/},
year = 2005,
bdsk-url-1 = {https://dblp.org/rec/conf/ndss/SunLVS05},
bdsk-url-2 = {https://www.ndss-symposium.org/ndss2005/one-way-isolation-effective-approach-realizing-safe-execution-environments/},
}
|
|
[7]
|
Isolated Program Execution: An Application Transparent Approach for Executing Untrusted Programs
(Zhenkai Liang, V. N. Venkatakrishnan, R. Sekar)
19th Annual Computer Security Applications Conference (ACSAC 2003), 8-12 December 2003, Las Vegas, NV, USA (ACSAC'03), pp. 182–191 Best Paper Award!
Abstract
In this paper, we present a new approach for safe execution of untrusted programs by isolating their effects from the rest of the system. Isolation is achieved by intercepting file operations made by untrusted processes, and redirecting any change operations to a "modification cache" that is invisible to other processes in the system. File read operations performed by the untrusted process are also correspondingly modified, so that the process has a consistent view of system state that incorporates the contents of the file system as well as the modification cache. On termination of the untrusted process, its user is presented with a concise summary of the files modified by the process. Additionally, the user can inspect these files using various software utilities (e.g., helper applications to view multimedia files) to determine if the modifications are acceptable. The user then has the option to commit these modifications, or simply discard them. Essentially, our approach provides "play" and "rewind" buttons for running untrusted software. Key benefits of our approach are that it requires no changes to the untrusted programs (to be isolated) or the underlying operating system; it cannot be subverted by malicious programs; and it achieves these benefits with acceptable runtime overheads. We describe a prototype implementation of this system for Linux called Alcatraz and discuss its performance and effectiveness.
►bibtex
PDF DOI: 10.1109/CSAC.2003.1254323
@inproceedings{Liang:acsac03,
abstract = {In this paper, we present a new approach for safe execution of untrusted programs by isolating their effects from the rest of the system. Isolation is achieved by intercepting file operations made by untrusted processes, and redirecting any change operations to a "modification cache" that is invisible to other processes in the system. File read operations performed by the untrusted process are also correspondingly modified, so that the process has a consistent view of system state that incorporates the contents of the file system as well as the modification cache. On termination of the untrusted process, its user is presented with a concise summary of the files modified by the process. Additionally, the user can inspect these files using various software utilities (e.g., helper applications to view multimedia files) to determine if the modifications are acceptable. The user then has the option to commit these modifications, or simply discard them. Essentially, our approach provides "play" and "rewind" buttons for running untrusted software. Key benefits of our approach are that it requires no changes to the untrusted programs (to be isolated) or the underlying operating system; it cannot be subverted by malicious programs; and it achieves these benefits with acceptable runtime overheads. We describe a prototype implementation of this system for Linux called Alcatraz and discuss its performance and effectiveness.},
address = {Las Vegas, NV},
author = {Zhenkai Liang and V. N. Venkatakrishnan and R. Sekar},
booktitle = {19th Annual Computer Security Applications Conference {(ACSAC} 2003), 8-12 December 2003, Las Vegas, NV, {USA}},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 22:10:26 -0600},
doi = {10.1109/CSAC.2003.1254323},
keywords = {runtime monitoring;sandboxing;isolated execution;Software installation;},
month = {Dec},
note = {Best Paper Award!},
pages = {182--191},
publisher = {{IEEE} Computer Society},
series = {{ACSAC}'03},
title = {Isolated Program Execution: An Application Transparent Approach for Executing Untrusted Programs},
url = {https://doi.org/10.1109/CSAC.2003.1254323},
year = 2003,
bdsk-url-1 = {https://dblp.org/rec/conf/acsac/LiangVS03},
bdsk-url-2 = {https://doi.org/10.1109/CSAC.2003.1254323},
}
|
|
[6]
|
Model-carrying code: a practical approach for safe execution of untrusted applications
(R. Sekar, V. N. Venkatakrishnan, Samik Basu, Sandeep Bhatkar, Daniel C. DuVarney)
Proceedings of the 19th ACM Symposium on Operating Systems Principles 2003, SOSP 2003, Bolton Landing, NY, USA, October 19-22, 2003 (SOSP'03), pp. 15–28 Acceptance rate: 17%
Abstract
This paper presents a new approach called model-carrying code (MCC) for safe execution of untrusted code. At the heart of MCC is the idea that untrusted code comes equipped with a concise high-level model of its security-relevant behavior. This model helps bridge the gap between high-level security policies and low-level binary code, thereby enabling analyses which would otherwise be impractical. For instance, users can use a fully automated verification procedure to determine if the code satisfies their security policies. Alternatively, an automated procedure can sift through a catalog of acceptable policies to identify one that is compatible with the model. Once a suitable policy is selected, MCC guarantees that the policy will not be violated by the code. Unlike previous approaches, the MCC framework enables code producers and consumers to collaborate in order to achieve safety. Moreover, it provides support for policy selection as well as enforcement. Finally, MCC makes no assumptions regarding the inherent risks associated with untrusted code. It simply provides the tools that enable a consumer to make informed decisions about the risk that he/she is willing to tolerate so as to benefit from the functionality offered by an untrusted application.
►bibtex
PDF DOI: 10.1145/945445.945448
@inproceedings{Sekar:sosp03,
abstract = {This paper presents a new approach called model-carrying code (MCC) for safe execution of untrusted code. At the heart of MCC is the idea that untrusted code comes equipped with a concise high-level model of its security-relevant behavior. This model helps bridge the gap between high-level security policies and low-level binary code, thereby enabling analyses which would otherwise be impractical. For instance, users can use a fully automated verification procedure to determine if the code satisfies their security policies. Alternatively, an automated procedure can sift through a catalog of acceptable policies to identify one that is compatible with the model. Once a suitable policy is selected, MCC guarantees that the policy will not be violated by the code. Unlike previous approaches, the MCC framework enables code producers and consumers to collaborate in order to achieve safety. Moreover, it provides support for policy selection as well as enforcement. Finally, MCC makes no assumptions regarding the inherent risks associated with untrusted code. It simply provides the tools that enable a consumer to make informed decisions about the risk that he/she is willing to tolerate so as to benefit from the functionality offered by an untrusted application.},
address = {Bolton Landing, NY, USA},
annote = {Acceptance rate: 17%},
author = {R. Sekar and V. N. Venkatakrishnan and Samik Basu and Sandeep Bhatkar and Daniel C. DuVarney},
booktitle = {Proceedings of the 19th {ACM} Symposium on Operating Systems Principles 2003, {SOSP} 2003, Bolton Landing, NY, USA, October 19-22, 2003},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-15 09:59:26 -0600},
doi = {10.1145/945445.945448},
keywords = {mobile code;verification;model checking;system call monitoring;Security policies;formal methods},
month = {Oct},
pages = {15--28},
publisher = {{ACM}},
series = {SOSP'03},
title = {Model-carrying code: a practical approach for safe execution of untrusted applications},
url = {https://doi.org/10.1145/945445.945448},
year = 2003,
bdsk-url-1 = {https://dblp.org/rec/conf/sosp/SekarVBBD03},
bdsk-url-2 = {https://doi.org/10.1145/945445.945448},
}
|
|
[5]
|
SELF: a transparent security extension for ELF binaries
(Daniel C. DuVarney, V. N. Venkatakrishnan, Sandeep Bhatkar)
Proceedings of the New Security Paradigms Workshop 2003, August 18-21, 2003, Ascona, Switzerland (NSPW'03), pp. 29–38 Acceptance rate: 13 out of 43 submissions: 30%
Abstract
The ability to analyze and modify binaries is often very useful from a security viewpoint. Security operations one would like to perform on binaries include the ability to extract models of program behavior and insert inline reference monitors. Unfortunately, the existing manner in which binary code is packaged prevents even the simplest of analyses, such as distinguishing code from data, from succeeding 100 percent of the time. In this paper, we propose SELF, a security-enhanced ELF (Executable and Linking Format), which is simply ELF with an extra section added. The extra section contains information about (among other things) the address, size, and alignment requirements of each code and static data item in the program. This information is somewhat similar to traditional debugging information, but contains additional information specifically needed for binary analysis that debugging information lacks. It is also smaller, compatible with optimization, and less likely to facilitate reverse engineering, which we believe makes it practical for use with commercial software products. SELF approach has three key benefits. First, the information for the extra section is easy for compilers to provide, so little work is required on behalf of compiler vendors. Second, the extra section is ignored by default, so SELF binaries will run perfectly on all systems, including ones not interested in leveraging the extra information. Third, the extra section provides sufficient information to perform many security-related operations on the binary code. We believe SELF to be a practical approach, allowing many security analyses to be performed while not requiring major changes to the existing compiler infrastructure. An application example of the utility of SELF to perform address obfuscation (in which the addresses of all code and data items are randomized to defeat memory-error exploits) is presented.
►bibtex
PDF DOI: 10.1145/986655.986661
@inproceedings{DuVarney:nspw03,
abstract = {The ability to analyze and modify binaries is often very useful from a security viewpoint. Security operations one would like to perform on binaries include the ability to extract models of program behavior and insert inline reference monitors. Unfortunately, the existing manner in which binary code is packaged prevents even the simplest of analyses, such as distinguishing code from data, from succeeding 100 percent of the time. In this paper, we propose SELF, a security-enhanced ELF (Executable and Linking Format), which is simply ELF with an extra section added. The extra section contains information about (among other things) the address, size, and alignment requirements of each code and static data item in the program. This information is somewhat similar to traditional debugging information, but contains additional information specifically needed for binary analysis that debugging information lacks. It is also smaller, compatible with optimization, and less likely to facilitate reverse engineering, which we believe makes it practical for use with commercial software products. SELF approach has three key benefits. First, the information for the extra section is easy for compilers to provide, so little work is required on behalf of compiler vendors. Second, the extra section is ignored by default, so SELF binaries will run perfectly on all systems, including ones not interested in leveraging the extra information. Third, the extra section provides sufficient information to perform many security-related operations on the binary code. We believe SELF to be a practical approach, allowing many security analyses to be performed while not requiring major changes to the existing compiler infrastructure. An application example of the utility of SELF to perform address obfuscation (in which the addresses of all code and data items are randomized to defeat memory-error exploits) is presented.},
address = {Ascona, Switzerland},
annote = {Acceptance rate: 13 out of 43 submissions: 30%},
author = {Daniel C. DuVarney and V. N. Venkatakrishnan and Sandeep Bhatkar},
booktitle = {Proceedings of the New Security Paradigms Workshop 2003, August 18-21, 2003, Ascona, Switzerland},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-14 17:22:56 -0600},
doi = {10.1145/986655.986661},
keywords = {binary analysis;static analysis;compilers},
month = {Aug},
pages = {29--38},
publisher = {{ACM}},
series = {NSPW'03},
title = {{SELF:} a transparent security extension for {ELF} binaries},
url = {https://doi.org/10.1145/986655.986661},
year = 2003,
bdsk-url-1 = {https://dblp.org/rec/conf/nspw/DuVarneyVB03},
bdsk-url-2 = {https://doi.org/10.1145/986655.986661},
}
|
|
[4]
|
An Approach for Secure Software Installation
(V. N. Venkatakrishnan, R. Sekar, T. Kamat, S. Tsipa, Zhenkai Liang)
Proceedings of the 16th Conference on Systems Administration (LISA 2002), Philadelphia, PA, USA, November 3-8, 2002 (LISA'02), pp. 219–226
Abstract
We present an approach that addresses the problem of securing software configurations from the security-relevant actions of poorly built/faulty installation packages. Our approach is based on a policy-based control of the package manager's actions and is customizable for site-specific policies. We discuss an implementation of this approach in the context of the Linux operating system for the Red Hat Package manager (RPM).
►bibtex
PDF DOI: 10.5555/1050517.1050544
@inproceedings{Venkatakrishnan:lisa02,
abstract = {We present an approach that addresses the problem of securing software configurations from the security-relevant actions of poorly built/faulty installation packages. Our approach is based on a policy-based control of the package manager's actions and is customizable for site-specific policies. We discuss an implementation of this approach in the context of the Linux operating system for the Red Hat Package manager (RPM).},
address = {Philadelphia, PA, USA},
author = {V. N. Venkatakrishnan and R. Sekar and T. Kamat and S. Tsipa and Zhenkai Liang},
booktitle = {Proceedings of the 16th Conference on Systems Administration {(LISA} 2002), Philadelphia, PA, USA, November 3-8, 2002},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-17 20:57:09 -0600},
doi = {10.5555/1050517.1050544},
keywords = {Software installation; Security policies;runtime monitoring},
month = {Nov},
pages = {219--226},
publisher = {{USENIX}},
series = {LISA'02},
title = {An Approach for Secure Software Installation},
url = {https://dl.acm.org/doi/10.5555/1050517.1050544},
year = 2002,
bdsk-url-1 = {https://dblp.org/rec/conf/lisa/VenkatakrishnanSKTL02},
bdsk-url-2 = {http://www.usenix.org/publications/library/proceedings/lisa02/tech/venkatakrishnan.html},
bdsk-url-3 = {https://dl.acm.org/doi/10.5555/1050517.1050544},
bdsk-url-4 = {https://doi.org/10.5555/1050517.1050544},
}
|
|
[3]
|
Empowering mobile code using expressive security policies
(V. N. Venkatakrishnan, Ram Peri, R. Sekar)
Proceedings of the 2002 Workshop on New Security Paradigms, Virginia Beach, VA, USA, September 23-26, 2002 (NSPW'02), pp. 61–68
Abstract
Existing approaches for mobile code security tend to take a conservative view that mobile code is inherently risky, and hence focus on confining it. Such confinement is usually achieved using access control policies that restrict mobile code from taking any action that can potentially be used to harm the host system. While such policies can be helpful in keeping "bad applets" in check, they preclude a large number of useful applets. We therefore take an alternative view of mobile code security, one that is focused on empowering mobile code rather than disabling it. We propose an approach wherein highly expressive security policies provide the basis for such empowerment, while greatly mitigating the risks posed to the host system by such code. Our policies are represented as extended finite state automata (a generalization of finite-state automata that permits the use of variables), which can enforce these policies efficiently. We have built a prototype implementation of our approach for Java. Our implementation is based on rewriting Java byte code so that security-relevant events are intercepted and forwarded to the policy enforcement automata before they are executed. Early experimental results indicate that such expressive, enabling policies can be supported with low overheads.
►bibtex
PDF DOI: 10.1145/844102.844113
@inproceedings{Venkatakrishnan:nspw02,
abstract = {Existing approaches for mobile code security tend to take a conservative view that mobile code is inherently risky, and hence focus on confining it. Such confinement is usually achieved using access control policies that restrict mobile code from taking any action that can potentially be used to harm the host system. While such policies can be helpful in keeping "bad applets" in check, they preclude a large number of useful applets. We therefore take an alternative view of mobile code security, one that is focused on empowering mobile code rather than disabling it. We propose an approach wherein highly expressive security policies provide the basis for such empowerment, while greatly mitigating the risks posed to the host system by such code. Our policies are represented as extended finite state automata (a generalization of finite-state automata that permits the use of variables), which can enforce these policies efficiently. We have built a prototype implementation of our approach for Java. Our implementation is based on rewriting Java byte code so that security-relevant events are intercepted and forwarded to the policy enforcement automata before they are executed. Early experimental results indicate that such expressive, enabling policies can be supported with low overheads.},
address = {Virginia Beach, VA, USA},
author = {V. N. Venkatakrishnan and Ram Peri and R. Sekar},
booktitle = {Proceedings of the 2002 Workshop on New Security Paradigms, Virginia Beach, VA, USA, September 23-26, 2002},
date-added = {2026-02-13 19:50:58 -0600},
date-modified = {2026-02-14 10:04:05 -0600},
doi = {10.1145/844102.844113},
editor = {Cristina Serban and Carla Marceau and Simon N. Foley},
keywords = {mobile code;Security policies;runtime monitoring},
month = {Sep},
pages = {61--68},
publisher = {{ACM}},
series = {NSPW'02},
title = {Empowering mobile code using expressive security policies},
url = {https://doi.org/10.1145/844102.844113},
year = 2002,
bdsk-url-1 = {https://dblp.org/rec/conf/nspw/VenkatakrishnanPS02},
bdsk-url-2 = {https://doi.org/10.1145/844102.844113},
}
|