Responsible, Informative, and Secure Computing (RISC)

The RISC lab celebrating the end of Spring and the start of Summer 2023.
Five members of the RISC Lab (Alan, Salvador, Saeid, Verya, and Ranit) in Summer 2023.

JOIN US!

We welcome enthusiastic students at any level to our lab. Please contact RISC to inquire about opportunities. We have ongoing projects at the intersection of software engineering, machine learning, and cybersecurity:

  • Secure, Efficient, and Reliable Machine Learning
  • Testing for Discrimination in Algorithmic Decision-Support Systems
  • Software Engineering for Legal Reasoning and Auditing
  • Fuzz Testing and Causal Debugging

Find Us

The RISC lab is located on the third floor of the Chemistry and Computer Science Building (CCSB 3.1202D). Please send any queries to saeid{at}utep.edu

Students

PhD Students

  • Verya Monjezi (UTEP, advisor) Spring 2022-now
  • Ranit Debnath Akash (UTEP, advisor) Summer 2023-now
  • Sina Khiabani (UTEP, advisor) Fall 2023-now
  • Ashish Kumar (Penn State, co-advised with Dr. Tan) Spring 2021-now

MS Students

  • Vishnu Dasu (Penn State, MS Thesis Mentor with Dr. Tan) Spring 2023-now
  • Varsha Dewangan (CU Boulder, co-advised with Dr. Trivedi) Fall 2023-now
  • Hemanth Rao Karade Nagendra (CU Boulder, co-advised with Dr. Trivedi) Fall 2023
  • Alejandro Rodriguez (UTEP, Mentor) Summer 2023
  • Hector Reyes (UTEP, Mentor) Spring 2021-Fall 2022
  • Tatheer Zahra (Penn State, co-advised with Dr. Tan, now Software Engineer @ Hewlett Packard Enterprise) Spring 2023-Summer 2023

Undergraduate Students

  • Salvador Robles (UTEP, co-advised with Dr. Kreinovich) Summer 2023-now
  • Alan Ochoa (UTEP) Summer 2023
  • Normen Yu (Penn State, BS Thesis Mentor with Dr. Tan) Spring 2021-Summer 2023
  • Hector A Reyes (UTEP, co-advised with Dr. Kreinovich) Spring 2021-Fall 2022
  • Victor Juarez (UTEP, CAHSI REU, now Software Engineer @ Bloomberg) Spring 2022-Fall 2022

High School Students

  • Daniyal Dawood (Coronado HS, now UG student at UT Austin) Summer 2022

Overview of Current Projects

Accountable and Fair Computing
Discriminatory software is a critical issue in automated decision-support systems. For example, parole decision-making software was found to harm black and Hispanic defendants by falsely predicting a higher risk of re-offending for them than for non-Hispanic white defendants; Amazon's hiring algorithm rejected female applicants at a disproportionately higher rate than male applicants; and the IRS audited black taxpayers at much higher rates than other racial groups. It is therefore of paramount importance to detect, explain, and mitigate software discrimination in order to improve trustworthiness and minimize harm to vulnerable communities. The central research goal is to devise principled testing and debugging methods to improve fairness in data-driven and rule-based software.
See the research papers: ICSE'22, ICSE'23, and ICSE-SEIS'23. Also, see the ICSE'22 presentation.
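As a rough illustration of the style of fairness testing involved (a minimal sketch on hypothetical data, not the lab's actual tooling), the Python snippet below trains a toy scikit-learn classifier and then searches for input pairs that differ only in a protected attribute yet receive different decisions; the dataset, column roles, and helper name find_discriminatory_pairs are made up for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical tabular data: column 0 plays the role of a protected
    # attribute, and the label (unfairly) depends on it.
    X = rng.random((1000, 5))
    y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
    model = LogisticRegression().fit(X, y)

    def find_discriminatory_pairs(model, n_trials=5_000, protected_col=0):
        """Sample random inputs, flip the protected attribute, and report
        pairs whose predicted decision changes (individual discrimination)."""
        flagged = []
        for _ in range(n_trials):
            x = rng.random(5)
            x_flipped = x.copy()
            x_flipped[protected_col] = 1.0 - x[protected_col]
            if model.predict([x])[0] != model.predict([x_flipped])[0]:
                flagged.append((x, x_flipped))
        return flagged

    print(len(find_discriminatory_pairs(model)), "discriminatory pairs found")

Each flagged pair is a concrete, replayable witness of individual discrimination; real testing tools search far more cleverly than random sampling, but the oracle is the same.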

This research has been funded by the NSF DASS program, Assessing Accountability of Tax Preparation Software Systems, in collaboration with Dr. Krystia Reed (UTEP, Psychology) and Dr. Ashutosh Trivedi (CU Boulder, CS): $750K for UTEP and CU Boulder combined, $530K for UTEP.

Efficient Machine Learning Systems: Performance Bugs and Denial-of-Service Issues
Performance bugs, programming errors that cause significant performance degradation, lead to low throughput, poor user experience, and security vulnerabilities. Building a tool that can detect and explain performance bugs is difficult because they arise from non-functional aspects of programs and often require hyper-trace analysis to identify root causes. We have developed a set of tools and techniques to discover, interpret, and mitigate performance bugs and denial-of-service vulnerabilities in machine learning libraries and Java web applications. We synthesized approaches from gray-box fuzzing, dynamic program analysis, machine learning inference, and verification to build scalable and useful program analysis tools. These tools have successfully found multiple performance bugs in popular ML libraries such as scikit-learn. See the ISSTA'20, AAAI'18, and TACAS'17 papers for more information.
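To give a flavor of the performance-fuzzing idea (a minimal, self-contained sketch with a toy target, not the lab's fuzzer), the loop below randomly mutates inputs, times a stand-in function whose cost depends on its input, and keeps the inputs that trigger the largest slowdowns; the names target, mutate, and perf_fuzz are hypothetical.

    import random
    import time

    def target(text: str) -> bool:
        # Toy stand-in for code with input-dependent cost: a naive substring scan.
        return any(text[i:i + 3] == "aaa" for i in range(len(text)))

    def mutate(seed: str) -> str:
        # Insert a short run of characters at a random position.
        i = random.randrange(len(seed) + 1)
        return seed[:i] + random.choice("ab") * random.randint(1, 20) + seed[i:]

    def perf_fuzz(seed="ababab", budget=500, keep=5):
        corpus = [(0.0, seed)]
        for _ in range(budget):
            candidate = mutate(random.choice(corpus)[1])
            start = time.perf_counter()
            target(candidate)
            elapsed = time.perf_counter() - start
            corpus.append((elapsed, candidate))
            corpus = sorted(corpus, reverse=True)[:keep]  # keep only the slowest inputs
        return corpus

    for elapsed, inp in perf_fuzz():
        print(f"{elapsed * 1e6:8.1f} us  len={len(inp)}")

A coverage- or resource-guided fuzzer replaces the wall-clock timer with finer-grained cost feedback (for example, instruction or loop counts), but the keep-the-worst-inputs loop is the core of the approach.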

This research has been funded by the NSF SaTC program, Detecting and Localizing Non-Functional Vulnerabilities in Machine Learning Libraries, in collaboration with Dr. Tan ($600K for UTEP and Penn State combined; $353,357 for UTEP).

Detecting Security Vulnerabilities via ML and DNN
Confidentiality and privacy are critical requirements of software. However, software might contain vulnerabilities that attackers can exploit to leak sensitive information. The key idea is to devise novel fuzzing techniques that automatically generate test cases for detecting network intrusions and privacy vulnerabilities in code. The developed tools have identified multiple side-channel vulnerabilities in critical Java frameworks such as OpenJDK and Eclipse Jetty.
See the TNSM'22, ISSTA'21, NDSS'20, and CAV'19 papers.
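As a toy illustration of the timing side channels such tools look for (a minimal sketch with a deliberately insecure comparison, not code from OpenJDK or Eclipse Jetty), the snippet below times an early-exit byte comparison for two guesses against a secret and compares the medians; a consistent gap means the running time leaks how much of the guess matches.

    import statistics
    import time

    def insecure_compare(secret: bytes, guess: bytes) -> bool:
        # Early-exit comparison: running time depends on the matching prefix length.
        if len(secret) != len(guess):
            return False
        for a, b in zip(secret, guess):
            if a != b:
                return False
        return True

    def median_time(secret, guess, reps=500):
        samples = []
        for _ in range(reps):
            start = time.perf_counter()
            insecure_compare(secret, guess)
            samples.append(time.perf_counter() - start)
        return statistics.median(samples)

    secret = b"A" * 4096
    long_prefix = b"A" * 4095 + b"B"   # differs only in the last byte
    no_prefix = b"B" * 4096            # differs in the first byte

    print(f"long matching prefix: {median_time(secret, long_prefix) * 1e6:.2f} us")
    print(f"no matching prefix:   {median_time(secret, no_prefix) * 1e6:.2f} us")

A fuzzing-based analysis automates this comparison: it searches for inputs that maximize observable differences (time, memory, response size) across secret values, and the resulting input pairs serve as concrete side-channel witnesses.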