
WPCCS'23 Winter Schedule

8:30-9:30 Registration

9:30-10:00 Guest Talk: Dr Ayşe Sunar (MSB0.07), Chair: Zhihao Dai

10:00-10:15 Break

Computational Biology (MSB0.07), Chair: Neha Rino

10:15-10:30 Integrating Social Network Analysis and Deep Learning for Improved Prediction of Molecular Pathways in Colorectal Cancer, Neda Zamanitajeddin
10:30-10:45 Decoding Lung Adenocarcinoma: A Generalizable Growth Patterns Classification Model Using H&E Whole Slide Histology Images, Arwa Alrubaian
10:45-11:00 Spatio-Temporal application of Heterogeneous Graph Neural Networks on cell surface data, Edward Offord

11:00-11:15 Coffee Break

Theory and Foundations (MSB0.08), Chair: Aditya Prakash

11:15-11:30 Simple Programs with NP-hard Termination, Henry Sinclair-Banks
11:30-11:45 Constant-depth circuits vs. monotone circuits, Bruno Pasqualotto Cavalar
11:45-12:00 Fault-Tolerant Kernel Oracles, Peter Strulo (Cancelled)
12:00-12:15 Zero Knowledge Proofs For All Of DP, Ari Biswas

Machine Learning (MSB0.07), Chair: Zhihao Dai

11:15-11:30 Source Code Plagiarism Detection with Transformers, Fahad Ebrahim
11:30-11:45 Reconstructing Training Data from Trained Neural Networks, Andrei Codreanu
11:45-12:00 Enhancing Personalization with Graph Neural Networks in Federated Learning, Yan Qian
12:00-12:15 Distributionally Robust Routing Problems, Patrick O'Hara
12:15-12:30 Efficient low-bit quantization for Graph Neural Networks, Rustam Guliyev (Online)

12:30-13:30 Lunch (MSB Atrium); PGCert Transferable Skills in Science (MSB Atrium)

Poster Sessions (MSB Atrium), Chair: Aditya Prakash

Computational Biology Posters (MSB Atrium)
Multi-objective optimization for high-definition transcranial electrical stimulation of the human brain, Mo Wang

Artificial Intelligence Posters (MSB Atrium)
Towards Real-Time Sign Language Recognition and Translation on Edge Devices, Shiwei Gan
Accurate segmentation of nuclear instances using a double-stage neural network, Kesi Xu
NLP Approaches to Detect Threatening and Abusive Language Used in Communications With Victims, Sahrish Khan (Cancelled)

Computer Security Posters (MSB Atrium)
Revealing Ongoing Sensor Attacks in Industrial Control System via Setpoint Modification, Zhihao Dai

13:30-14:00 Guest Talk: Dr Amir Shirian (MSB0.07), Chair: Beniel Thileepan

14:00-14:15 Break

Computational Biology (MSB0.08), Chair: Patrick O'Hara

14:15-14:30 Dynamics of Brain Lateralization and Genetic Underpinnings during Chinese Natural Speech Processing: Preferential Occurrence of Logic-Related Patterns in Males and Emotion-Related Patterns in Females, Ruohan Zhang (Online)
14:30-14:45 Robust and Efficient Detection of Mitotic Figures, Mostafa Jahanifar

Artificial Intelligence (MSB0.07), Chair: Daniela Valdes

14:15-14:30 Clairvoyance Via Network Statistics, Rosco Hunter
14:30-14:45 Explaining Diffusion Models, Bhushan Atote
14:45-15:00 Enlightenment Improves Performance, Minghao Xu
15:00-15:15 First-view pedestrian trajectory prediction based on diffusion modeling, Boyang Fu

Theory and Foundations (MSB0.01), Chair: Finnbar Keating

14:15-14:30 Distance-based robustness for Signal Temporal Logic, Neha Rino
14:30-14:45 Parity Index Hierarchy Collapses in Semantically-Deterministic Automata, Aditya Prakash

15:15-15:30 Coffee Break

High-Performance Computing (MSB0.08), Chair: Neha Rino

15:30-15:45 OP-PIC – An Unstructured Mesh Particle in Cell Domain Specific Language, Zaman Lantra
15:45-16:00 Automatic Synthesis of Structured-mesh Solvers on FPGAs using High-Level Abstractions, Beniel Thileepan
16:00-16:15 GNN Embedding for taskflow allocation in MEC environment, Xuanyu Liu (Cancelled)

Artificial Intelligence (MSB0.07), Chair: Ashley Au

15:30-15:45 Developing a Benchmark for Evaluating Hallucination in Multimodal Large Language Models, Kaiwen Zuo
15:45-16:00 A Closer Look at Data Overlap in the Difficulty of Unlearning, Kairan Zhao (Cancelled)
16:00-16:15 FLAIM: AIM-based Synthetic Data Generation in the Federated setting, Samuel Maddock
16:15-16:30 NLP-based Social Media Cyberthreat Intelligence Framework for early prediction of Zero-day Attacks, Majed Albarrak

Artificial Intelligence (MSB0.01), Chair: Patrick O'Hara

15:30-15:45 Causal Optimal Transport of Abstractions, Yorgos Felekis
15:45-16:00 Stepping into the boardroom: A novel AI-enabled method for identifying collective leadership, Daniela Valdes
16:00-16:15 Betting on what is neither verifiable nor falsifiable, Abhimanyu Pallavi Sudhir
16:15-16:30 Learning stable allocation in convex cooperative game under uncertainty, Nam Tran (Cancelled)

16:30-16:45 Closing including Prize Announcement (MSB0.07), Chair: Beniel Thileepan

Guests

The potential of participant text comments in MOOCs

by Dr Ayşe Sunar

Participants' contributions in MOOCs, especially written comments in text form, are crucial for researchers and course designers to understand participants' behaviour. Given the large scale of participation, it is a challenge to model participants' behaviour patterns and identify at-risk participants. However, this challenge is also an opportunity to apply large language models to automatically and intelligently identify student needs and behaviour so that pedagogical interventions can be implemented. I will present some interesting results of my past research and talk about the potential applications of large language models in MOOCs.

Generative AI: The What, Why and How

by Dr Amir Shirian

Generative artificial intelligence is reinventing what computers can do - from augmenting human creativity to automating rote tasks. Unlike previous AI systems focused solely on analysis, generative AI allows computers to produce novel, original artifacts and content. Models can generate images, text, code, designs, molecules, and more with increasing coherence and quality. This paradigm shift opens doors for innovations in how we create, discover, and communicate. Generative models like diffusion models and large language models are trained on vast datasets to understand and replicate patterns. Sampling from the learned distribution, they can extrapolate new outputs. Recent advances allow control and conditioning for specific purposes. Applications span industries, from artists leveraging tools like DALL-E for imagination augmentation to startups using generative chemistry to accelerate drug formulation. Despite the numerous benefits, potential concerns range from biases incorporated in data to misuse of forged media to compute's environmental impact. Managing these responsibly is crucial as generative AI looks to significantly transform society. Overall, this technology represents a versatile engine for automating tasks, assisting people, and realizing novel ideas.

Artificial Intelligence

Clairvoyance Via Network Statistics

by Rosco Hunter

For every piece of input data, neural networks have a corresponding gradient (learning) and activation pattern (representation). By studying the statistics of the gradients and activation patterns produced by a network at initialisation, we can gain insights into its ability to learn and represent information after training. In my paper "Exploiting Network Compressibility and Topology in Zero-Cost NAS", we used these network statistics to produce state-of-the-art, zero-cost performance prediction.
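
The talk's specific statistics are not spelled out here; as a generic illustration of an at-initialisation, zero-cost score of the kind described above, the sketch below sums the absolute parameter gradients of an untrained PyTorch model over one random batch (the model and data are placeholders, not the paper's benchmark):

import torch
import torch.nn as nn

def grad_abs_sum_score(model: nn.Module, inputs: torch.Tensor, targets: torch.Tensor) -> float:
    """Sum of absolute parameter gradients at initialisation (an illustrative zero-cost proxy)."""
    model.zero_grad()
    loss = nn.functional.cross_entropy(model(inputs), targets)
    loss.backward()
    return sum(p.grad.abs().sum().item() for p in model.parameters() if p.grad is not None)

# Usage: score an untrained candidate architecture on one random batch.
net = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 256), nn.ReLU(), nn.Linear(256, 10))
x, y = torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,))
print(grad_abs_sum_score(net, x, y))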

Explaining Diffusion Models

by Bhushan Atote

With the growing popularity and practicality of generative models, diffusion models have demonstrated remarkable capabilities in generating novel samples. However, concerns arise due to the misuse of such generative models, particularly in the creation of deepfake images. This raises questions regarding the authenticity and provenance of generated content. To tackle this issue, we employ the principles of explainable AI, which seek to reveal the attributes from the training data that give rise to the production of synthetic content. By carefully examining the noise present in noisified inputs using neural networks, our goal is to identify the precise image elements from training samples that influence the creation of the ultimate generated image.

Enlightenment Improves Performance

by Minghao Xu

Every training process suffers from imbalanced data samples, especially during the early stages. Although there are heuristic methods that help improve training performance, such as dropout and learning-rate warmup, they are architecture-dependent and require adjusting hyperparameters accordingly. We instead want to address this from the data side, using dataset surrogates to perform the warmup. By introducing this architecture-agnostic plugin and focusing on the vital initial stage, we aim to overcome the underlying batch imbalance and obtain better training performance.

First-view pedestrian trajectory prediction based on diffusion modeling

by Boyang Fu

In this study, we explore a first-person-view pedestrian trajectory prediction method based on a diffusion model. The method aims to address the problem of predicting a pedestrian's position in future frames from a continuous first-person video stream captured by a wearable camera. We propose a novel architecture that incorporates the scale, pose, and self-motion information of pedestrians to achieve accurate prediction of their future locations. Our model employs a diffusion process to simulate the uncertainty of pedestrian motion and learns the underlying patterns of pedestrian motion through the network.

Developing a Benchmark for Evaluating Hallucination in Multimodal Large Language Models

by Kaiwen Zuo

Recent AI research has focused on the development of multimodal Large Language Models (LLMs) that combine language, vision, and reasoning to interpret complex data inputs, as highlighted by Zhao (2023). These advanced LLMs integrate various modalities to analyze and correlate information, thereby enhancing problem-solving and data analysis capabilities within the field of AI. While these models have shown impressive capabilities, they are prone to generating plausible yet incorrect responses, known as "hallucination of intelligence." This phenomenon can lead to misinformation and unreliable outputs, particularly in vision-language models, where there is a risk of generating text that inaccurately describes visual content. Our work aims to develop a comprehensive multimodal benchmark to assess the tendency for hallucination in vision-language models, identify evaluation metrics, and enhance the reliability of these models.

A Closer Look at Data Overlap in the Difficulty of Unlearning

by Kairan Zhao

Machine unlearning is an emerging field in machine learning that involves removing the influence of specific data from trained models to ensure privacy and meet evolving regulatory demands. This project delves into how overlaps between 'forget set' (data to be removed) and 'retain set' (data to be kept) impact the unlearning challenge. This impact is scrutinized by probing the model's parameter and embedding spaces. The former is analyzed using the Fisher Information Matrix to discern parameter significance, while the latter is assessed for disentanglement, reflecting the distinctiveness of the model's data representation. The difficulty of unlearning is evaluated in terms of model utility and privacy. The research reveals an interaction between these overlaps and unlearning difficulty, and also explores different unlearning algorithms and their performance in various scenarios, offering valuable perspectives for improving unlearning processes.
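
As a generic illustration of the parameter-space probe mentioned above (this is not the project's code; the model and data loader are placeholders, and the estimate is a coarse empirical-Fisher approximation using batch-averaged gradients), a diagonal Fisher Information estimate can be computed in PyTorch as follows:

import torch
import torch.nn.functional as F

def diagonal_fisher(model, data_loader, device="cpu"):
    """Coarse diagonal empirical Fisher: average of squared log-likelihood gradients over batches."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
    model.eval()
    n_batches = 0
    for x, y in data_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        log_probs = F.log_softmax(model(x), dim=-1)
        loss = F.nll_loss(log_probs, y)   # negative log-likelihood of the observed labels
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}

# Toy usage with a dummy model and synthetic batches.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
batches = [(torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))) for _ in range(4)]
fisher = diagonal_fisher(model, batches)
print({n: round(v.mean().item(), 6) for n, v in fisher.items()})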

FLAIM: AIM-based Synthetic Data Generation in the Federated setting

by Samuel Maddock

Enabling collaborative data sharing is crucial for organisations. Synthetic data is one solution, producing artificial data that mirrors the statistical properties of private data. While numerous techniques have been devised under differential privacy, they predominantly assume data is centralised. However, data is often distributed across multiple clients in a federated manner. In this work, we initiate the study of federated tabular data generation. Building upon a central approach, AIM, we present DistAIM and FLAIM. We show how to distribute AIM, extending a recent approach based on secure multi-party computation which necessitates additional overhead, making it less suited to federated scenarios. We then demonstrate that naively federating AIM leads to a substantial degradation in utility under heterogeneity. To mitigate both issues, we propose an approach that maintains a private proxy of heterogeneity. We perform an extensive comparison across benchmarks to show we can improve utility while reducing overhead.

NLP-based Social Media Cyberthreat Intelligence Framework for early prediction of Zero-day Attacks

by Majed Albarrak

Zero-day attacks are cybersecurity attacks that exploit unknown software or hardware holes or weaknesses to attack the victim. They pose a significant threat to cybersecurity, exploiting undiscovered vulnerabilities or unknown adversary methods with high success rates. The project discusses key challenges in leveraging AI and NLP techniques for real-time CTI on social media platforms. Firstly, detected urgent cybersecurity-related events must be prioritized for swift response, but identifying urgency from textual content remains challenging. Despite advances in text analysis using transformers, conveying urgency and criticality accurately in cybersecurity text remains elusive. Furthermore, cybersecurity events can shift the attitude of text towards urgent, intense, or negative tones. However, finding causality patterns for these changes is a further challenge, given the lack of text urgency measurements and overlapping cybersecurity incidents.

Causal Optimal Transport of Abstractions

by Yorgos Felekis

Complex systems can encompass a range of fields, from physical systems like particle behaviour to social systems like decision-making. The challenge with studying complex systems is their level of detail regarding the dependencies and interactions within their parts or with their environment. Structural Causal Models (SCMs) are commonly used to model these systems by representing the cause-and-effect relationships between variables and predicting the consequences of changes/interventions to them. To simplify their analysis, we create abstractions, which are simplified high-level versions of these models that retain the crucial information. Our goal is to connect these abstractions back to the original, more detailed models, allowing us to jointly reason about evidence across models with multiple levels of granularity. To achieve this, we introduce a novel approach that leverages Optimal Transport theory to align the probability distributions resulting from interventions on the Structural Causal Models of both the detailed and abstracted systems.
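
For intuition only, here is a small sketch (my own illustration, not the paper's method) of aligning two discrete interventional distributions with an optimal transport coupling using the POT library; the supports, weights, and cost are made-up toy values:

import numpy as np
import ot  # POT: Python Optimal Transport

# Toy interventional distributions: 4 low-level outcomes vs. 2 abstract outcomes.
low_support = np.array([[0.0], [1.0], [2.0], [3.0]])
abs_support = np.array([[0.5], [2.5]])
p_low = np.array([0.1, 0.4, 0.3, 0.2])   # P(low-level outcome | intervention)
p_abs = np.array([0.5, 0.5])             # P(abstract outcome | abstracted intervention)

# Ground cost: squared distance between outcomes.
cost = ot.dist(low_support, abs_support, metric="sqeuclidean")

# The optimal coupling describes how low-level outcomes map onto abstract ones.
coupling = ot.emd(p_low, p_abs, cost)
print(coupling)                  # 4x2 transport plan
print((coupling * cost).sum())   # transport cost between the two distributions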

Stepping into the boardroom: A novel AI-enabled method for identifying collective leadership

by Daniela Valdes

NHS leaders must work together to solve the complex problems of the service. Collective leadership (CL, or in broad terms the leadership within groups) is difficult to define and to identify at an empirical level. This means NHS leaders are unable to assess their joint effectiveness. In this talk, we undertake greenfield research at the intersection of Organisational Research and NLP, building from mathematical/sociolinguistic frameworks to define an NLP task for CL identification. We (i) propose a formal definition of CL within the corpus of board reports of an NHS hospital to (ii) motivate a text-based method for identifying CL by combining established NLP tasks with the latest AI-prompting techniques applied to board meeting data. Our research is the first to set up a clear NLP problem on collective decision-making with associated data and methods for CL, bringing together two seemingly separate disciplines (NLP and Organisational Research).

Betting on what is neither verifiable nor falsifiable

by Abhimanyu Pallavi Sudhir

Philosophers like to claim that only falsifiable sentences are scientifically valid. Falsifiable sentences are universally quantified ones ("All swans are white"), while verifiable sentences are existentially quantified ("There exists a white swan"). However, there is an entire class of non-verifiable, non-falsifiable sentences that are taken seriously in science and mathematics: for starters, all the sentences higher up the arithmetical hierarchy. One way to give "meaning" to a sentence is to have betting markets for it, where the price of the sentence reflects your subjective probability. The first part of my work is in setting up betting markets for sentences up the arithmetical hierarchy. The second part concerns the class of sentences that are only meaningful "in the latent space", i.e. which are "subjective", and designing betting markets for these. This is a much more interesting problem, and underlies an apparent close connection between markets and AI.

Learning stable allocation in convex cooperative game under uncertainty

by Nam Tran

Collaboration among multiple agents is essential in many environments. However, when there is uncertainty, collaboration can be difficult. Therefore, to allocate rewards that promote future collaboration, a process of trial and error is necessary: agents can form temporary agreements based on their current knowledge and update them as they learn more through collaboration. However, the presence of learning among agents poses two typical challenges. First, this trial-and-error process can be expensive in real-world scenarios. Second, learning and collaboration can complicate the dynamics of the system, making it challenging to reach a stable agreement. In this work, we present an approach to address these two problems in the context of convex cooperative games. Specifically, we introduce a framework for learning the expected stable allocation of convex games under uncertainty and provide a high-probability bound on the number of samples required.

Artificial Intelligence Posters

Towards Real-Time Sign Language Recognition and Translation on Edge Devices

by Shiwei Gan

To provide instant communication for hearing-impaired people, it is essential to achieve real-time sign language processing anytime, anywhere. Therefore, in this paper, we propose a Region-aware Temporal Graph based neural Network (RTG-Net), aiming to achieve real-time Sign Language Recognition (SLR) and Translation (SLT) on edge devices. To reduce the computation overhead, we first construct a shallow graph convolution network to reduce model size by decreasing model depth. Besides, we apply structural re-parameterization to fuse the convolutional layer, batch normalization layer and all branches, simplifying model complexity by reducing model width. To also achieve high performance in sign language processing, we extract key regions based on skeleton keypoints in each frame, and design a region-aware temporal graph that combines key regions and the full frame for feature representation. In RTG-Net, we design a multi-stage training strategy to optimize keypoint selection, SLR and SLT step by step.
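
To illustrate the structural re-parameterization idea mentioned above in its simplest, generic form (not the RTG-Net code), the sketch below folds a 2D convolution and its following batch-normalization layer into a single convolution for inference:

import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold BatchNorm statistics into the preceding convolution (inference-time re-parameterization)."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding,
                      dilation=conv.dilation, groups=conv.groups, bias=True)
    std = torch.sqrt(bn.running_var + bn.eps)
    scale = bn.weight / std                                   # per-output-channel scale
    fused.weight.data = conv.weight * scale.reshape(-1, 1, 1, 1)
    conv_bias = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias
    return fused

# Sanity check: the fused conv matches conv -> bn in eval mode.
conv, bn = nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8)
bn.eval()
x = torch.randn(1, 3, 16, 16)
print(torch.allclose(bn(conv(x)), fuse_conv_bn(conv, bn)(x), atol=1e-5))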

Accurate segmentation of nuclear instances using a double-stage neural network

by Kesi Xu

Automatic nuclear instance segmentation is a crucial task in computational pathology, as this information is required for extracting cell-based features in downstream analysis. However, instance segmentation is a challenging task due to the nature of histology images, which show large variations and irregularities in nuclei appearance and arrangement. Various deep learning-based methods have tried to tackle these challenges; however, most of them fail to segment nuclei instances in crowded scenes accurately, or they are not fast enough for practical usage. In this paper, we propose a double-stage neural network for nuclear instance segmentation which leverages the power of an interactive model, NuClick, for accurate instance segmentation by replacing the user input with a nuclei detection module, YOLOv5. We optimized the proposed method to be lightweight and fast and show that it can achieve promising results when tested on the largest publicly available nuclei dataset.

NLP Approaches to Detect Threatening and Abusive Language Used in Communications With Victims

by Sahrish Khan

In the age of expanding online platforms, the prevalence of offensive language, particularly sexism, is a significant concern. Tackling online sexism is crucial in natural language processing (NLP). This study highlights our contributions to SemEval 2023 Task 10: Explainable Detection of Online Sexism, which includes binary sexism classification (Task A), categorization into four categories (Task B), and fine-grained classification into 11 vectors (Task C). We employ LLMs such as T5, LLaMA, RoBERTa, DeBERTaV3, and Flan-T5, exploring various model variations and techniques such as pre-training and in-context learning. Our focus is on model interpretability, allowing us to provide explanations for flagged sexist content. We also delve into identifying offensive personas and root causes of sexism, aiming to improve detection methods. Furthermore, we aim to extend our research to address broader issues like domestic violence and hate speech, contributing to a safer and more inclusive online environment.

Computational Biology

Integrating Social Network Analysis and Deep Learning for Improved Prediction of Molecular Pathways in Colorectal Cancer

by Neda Zamanitajeddin

Conventional techniques for identifying molecular pathways, genetic subtypes, and mutations associated with CRC (which are crucial for precision medicine) are costly, time-consuming, and tissue-destructive. Most recent deep learning methods proposed for this task lack interpretability. We present a novel approach to identify key mutations and molecular pathways in CRC by incorporating cell interaction information. Our method utilizes cell graphs, Social Network Analysis (SNA) measures, and deep learning. It significantly improves the prediction of key mutations (CIN, HM, TP53, BRAF, MSI) compared to state-of-the-art methods, achieving 2.4-4% and 7-8.8% improvements in AUROC and AUPRC on average. Additionally, we excel at MSI prediction (99% AUROC, 98% AUPRC) on an external dataset. This approach offers insights into the relationship between cell interactions, molecular pathways, and mutations, providing interpretability and computational efficiency. Our study demonstrates the potential of SNA-based features to enhance deep learning in CRC and other biological image analysis tasks.
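
As a toy illustration of the kind of SNA measures mentioned above (not the authors' pipeline; the cell centroids here are random), one can build a cell graph from detected centroids and compute per-node centralities with networkx:

import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
centroids = rng.uniform(0, 100, size=(50, 2))   # toy cell centroids from a tissue tile

# Connect cells whose centroids lie within a distance threshold (a simple cell graph).
G = nx.Graph()
G.add_nodes_from(range(len(centroids)))
for i in range(len(centroids)):
    for j in range(i + 1, len(centroids)):
        if np.linalg.norm(centroids[i] - centroids[j]) < 15.0:
            G.add_edge(i, j)

# Classic SNA measures that could be fed to a downstream deep model as node features.
features = {
    "degree": dict(G.degree()),
    "clustering": nx.clustering(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness": nx.closeness_centrality(G),
}
print({name: round(float(np.mean(list(vals.values()))), 3) for name, vals in features.items()})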

Decoding Lung Adenocarcinoma: A Generalizable Growth Patterns Classification Model Using H&E Whole Slide Histology Images

by Arwa Alrubaian

Lung adenocarcinoma (LUAD) is a morphologically heterogeneous disease, characterized by five primary histologic growth patterns. The quantity of these patterns within a tumor tissue is related to tumor behavior and has a significant impact on patient prognosis. Automating growth pattern classification with machine learning would be a valuable addition to pathology, enhancing the precision and objectivity of this task while alleviating its labor-intensive and time-consuming nature. In this research, we propose a novel and expeditious machine learning algorithm that can accurately classify a tissue patch into one of the five patterns or categorize it as non-tumor, with an AUROC of 0.95. Our model's strength lies in its comprehensive consideration of tissue cellular composition, giving it robust generalizability to previously unseen test sets, an area where all other existing approaches in the literature fail. The insights derived from our model can be used to predict prognosis, enhancing patient outcomes.

Spatio-Temporal application of Heterogeneous Graph Neural Networks on cell surface data

by Edward Offord

Cellular dynamics often encompass deformations of the cell membrane, presenting challenges in automatic detection and analysis within 4D microscopy imagery. Contemporary machine learning techniques grapple with the extensive size of these images, leading to substantial computational demands. This paper introduces a novel approach employing heterogeneous graph convolutional neural networks tailored to cellular time-series data. Using spectral graph correspondence techniques, we establish relationships between distinct timeframes of cell surface data. Our primary objective centers on categorizing mesh nodes into phases of the lifecycle of filopodia, which are actin-rich surface extensions. The proposed network exhibits robust performance, achieving an F1 score of 0.952 when classifying the life cycle of filopodia in simulated lung cancer cells, providing significant insight into the progression of these surface structures. Additionally, the techniques developed in this study show potential for adaptation to other cellular surface structures.

Dynamics of Brain Lateralization and Genetic Underpinnings during Chinese Natural Speech Processing: Preferential Occurrence of Logic-Related Patterns in Males and Emotion-Related Patterns in Females

by Ruohan Zhang

Though language is considered unique to humans with left dominant lateralization in brains, the dynamic nature of the interplay between hemispheres during language processing remains largely unknown. Here, we investigated whole-brain functional dynamic lateralization patterns during Chinese language processing and potential sex disparities using functional MRI data of 20 subjects listening to narrative stories in a 7T MRI scanner. Our findings revealed two distinct dynamic lateralization states, with areas of the language system consistently exhibiting left-lateralized and converse lateralization patterns for both association areas and primary sensorimotor areas. These two states, characterized by areas of higher-order systems exhibiting left- or right-lateralization, corresponded to the processing of rational and emotional contents. We observed pronounced inclinations towards the former state in males and the latter state in females, especially during the processing of rational content. Finally, genetic analysis revealed that the sex differences in lateralization states were potentially influenced by sex hormones.

Robust and Efficient Detection of Mitotic Figures

by Mostafa Jahanifar

Manual counting of mitotic figures, essential for cancer grading, is laborious and prone to discrepancies among pathologists. While automatic detection algorithms have emerged, many falter due to sensitivity to domain shifts and inadequate speed for clinical utility. We present a novel two-stage mitosis detection framework, which begins with a robust and fast segmentation model, EUNet, followed by a result refinement stage with a deeper classifier. Both components utilize domain-invariant techniques to enhance reliability. Evaluations on three prominent datasets confirmed our method's unparalleled efficacy, with F1-scores of 0.785 and 0.781 on the MIDOG21 and TUPAC datasets, and recognition as the top performer in the MIDOG21-22 challenges. Given its superior speed and accuracy, our algorithm holds significant potential for clinical integration. Its capabilities are further illustrated by generating a comprehensive repository from the TCGA breast cancer cohort.

Computational Biology Posters

Multi-objective optimization for high-definition transcranial electrical stimulation of the human brain

by Mo Wang

Transcranial electrical stimulation (tES) is a non-invasive neuromodulation technique with substantial potential for clinical applications, such as stroke treatment and motor function improvement. By injecting current through electrode pairs on the scalp surface, tES can modulate neural activity and avoid complications from intracranial stimulation. Designing a tES strategy requires considering multiple objectives, such as intensity in the target area, focality, stimulation depth, and avoidance zones, and these objectives often conflict with one another. We propose a general framework, called multi-objective optimization via evolutionary algorithm (MOVEA), which solves the non-convex optimization problem in designing tES strategies without a predefined direction. MOVEA enables simultaneous optimization of multiple targets through Pareto optimization, generating a Pareto front after a single run without manual weight adjustment and allowing easy expansion to more targets. This Pareto front consists of optimal solutions that meet various requirements while respecting trade-off relationships between conflicting objectives such as intensity and focality.
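
For readers unfamiliar with Pareto optimization, the sketch below (an illustration only; it is not MOVEA, and the candidate montages and objective values are made up) shows how a Pareto front is extracted from a population by keeping the non-dominated solutions when maximising intensity and focality:

import numpy as np

def pareto_front(objectives: np.ndarray) -> np.ndarray:
    """Return indices of non-dominated rows, assuming every objective is to be maximised."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # j dominates i if it is >= on all objectives and > on at least one.
            if i != j and np.all(objectives[j] >= objectives[i]) and np.any(objectives[j] > objectives[i]):
                keep[i] = False
                break
    return np.flatnonzero(keep)

# Toy population: each row is (target intensity, focality) for one candidate montage.
rng = np.random.default_rng(1)
population = rng.uniform(0, 1, size=(200, 2))
front = pareto_front(population)
print(f"{front.size} non-dominated montages out of {population.shape[0]}")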

Computer Security Posters

Revealing Ongoing Sensor Attacks in Industrial Control System via Setpoint Modification

by Zhihao Dai

A variety of Intrusion Detection Systems (IDSs) for Industrial Control Systems have been proposed to detect attacks and alert operators. Passive and active detection schemes are characterised by whether or not they interact with the process under control, though both categories of approach have limitations relating to either known correlations in the process data or the use of explicit system modelling. We propose setpoint modification as a strategy to address those limitations. The approach superimposes Gaussian noise on setpoint values, which helps reveal latent correlations between setpoints and measurements, allowing machine learning-based IDSs to learn them during training and verify them during inference. The proposed strategy demonstrates elevated detection performance while incurring marginal operational costs, and it opens up a new research direction, namely setpoint selection for optimal detection boost.
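
A toy sketch of the general idea (my simplification, not the paper's system; the process model, noise levels, and attack are all fabricated): perturb a setpoint with Gaussian noise, learn the setpoint-to-measurement response on attack-free data, and flag windows whose residuals no longer track the injected perturbation:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
T = 2000
setpoint = 50.0 + rng.normal(0.0, 2.0, T)                 # setpoint with superimposed Gaussian noise
measurement = 0.9 * setpoint + rng.normal(0.0, 0.2, T)    # toy process response

# Training: learn the latent setpoint -> measurement correlation on attack-free data.
model = LinearRegression().fit(setpoint[:1000].reshape(-1, 1), measurement[:1000])

# Inference: a replay-style sensor attack reports plausible values that ignore the setpoint wiggle.
attacked = measurement.copy()
attacked[1500:] = 45.0 + rng.normal(0.0, 0.2, 500)

residual = np.abs(attacked - model.predict(setpoint.reshape(-1, 1)))
threshold = residual[:1000].mean() + 3 * residual[:1000].std()
test_resid = residual[1000:]
clean_rate = (test_resid[:500] > threshold).mean()        # pre-attack test window
attack_rate = (test_resid[500:] > threshold).mean()       # attacked window
print(f"alarm rate clean: {clean_rate:.2%}, under attack: {attack_rate:.2%}")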

High-Performance Computing

OP-PIC – An Unstructured Mesh Particle in Cell Domain Specific Language

by Zaman Lantra

Ensuring performance portability of Particle-in-Cell (PIC) applications on emerging, massively parallel, heterogeneous high-performance-computing hardware is becoming an increasing concern for future ambitious scientific research related to energy generation through nuclear fusion reactions. One solution involves the development of high-level abstractions, such as domain-specific languages (DSLs), which allow domain specialists to specify the problem while delegating the implementation to a lower level through automatic code generation techniques. However, DSLs for PIC codes, particularly unstructured-meshed PIC, have not seen widespread development in the HPC community. In our work, we introduce OP-PIC, a novel DSL for unstructured PIC calculations, leveraging source-to-source translation and code generation methods originally developed in the OP2 DSL. We showcase the performance and portability of OP-PIC on both CPUs and Nvidia GPUs, employing various parallelisation techniques, including sequential, OpenMP, MPI, CUDA, and CUDA+MPI. Additionally, we incorporate algorithmic optimisations tailored to the PIC particle push routine, specifically for Electrostatic simulations.

Automatic Synthesis of Structured-mesh Solvers on FPGAs using High-Level Abstractions

by Beniel Thileepan

The structured-mesh class of applications is an important motif used in a range of numerical algorithms, such as the solution of Partial Differential Equations. In recent work, Field-Programmable Gate Arrays (FPGAs) have shown promising performance on specific structured-mesh applications. However, programming FPGAs, even with the more recently introduced high-level synthesis tools, requires significant manual customization and optimization. In this work, we aim to bridge this gap by generating highly optimized FPGA synthesis code from a domain-specific language called OPS for structured-mesh applications. OPS supports generating parallel code for a range of architectures through a stencil computation-friendly syntax. We explore developing a new code-generation framework for FPGAs within OPS. In contrast to generating code for traditional parallel architectures, FPGAs require significant, radical transformations previously unsupported in OPS. We present work towards overcoming these challenges and automating the process.

GNN Embedding for taskflow allocation in MEC environment

by Xuanyu Liu

We use reinforcement learning (RL) and neural networks to learn taskflow allocation algorithms in an MEC environment without any human instruction beyond a high-level objective. To do so, we develop new representations for jobs' dependency graphs, design scalable RL models, and devise RL training methods for dealing with continuous stochastic job arrivals.

Machine Learning

Source Code Plagiarism Detection with Transformers

by Fahad Ebrahim

Source Code Plagiarism Detection (SCPD) is a critical issue in computer science education, significantly impacting academic integrity. Given its resource-intensive nature, partial automation of this task would greatly assist instructors in their work. SCPD can be framed as a binary classification problem with two possible outcomes: 'yes' or 'no'. Pre-trained models (PTMs), or transformers, represent the state of the art in various Natural Language Processing (NLP) tasks. Consequently, domain-specific PTMs trained on source code (CodePTMs) have been developed to address Software Engineering aspects of code comprehension and generation. This work utilises multiple CodePTMs for the task of plagiarism detection, with a focus on their embeddings, inferencing capabilities, and fine-tuning. Various experiments show that these models can be beneficial in the context of SCPD, but certain limitations were observed during evaluation, such as limited token length, generalisability, and explainability.
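
As a rough sketch of the embedding-based route described above (my own illustration; the talk's models, data, and decision thresholds are not specified here, and CodeBERT is just one example of a CodePTM), one can embed two code snippets and compare them with cosine similarity:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

def embed(code: str) -> torch.Tensor:
    """Mean-pool the last hidden states into one vector per snippet."""
    inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, hidden_size)
    return hidden.mean(dim=1).squeeze(0)

a = "def total(xs):\n    return sum(xs)"
b = "def add_all(values):\n    s = 0\n    for v in values:\n        s += v\n    return s"
similarity = torch.nn.functional.cosine_similarity(embed(a), embed(b), dim=0)
print(f"cosine similarity: {similarity.item():.3f}")   # a threshold would turn this into a yes/no decision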

Reconstructing Training Data from Trained Neural Networks

by Andrei Codreanu

At NeurIPS 2022, Haim et al. proposed a framework that allows reconstructing precise versions of the images used in the training dataset of certain trained neural networks. This could be used, for example, to extract personal photos that were used to train a model without their owner's permission, with the aim of targeting that individual. However, the initial method suffers from a number of limitations, including the time needed to produce a high-quality reconstruction and the limited variety of the high-quality reproduced images. To address these, we employ some of the tools used in the literature on other kinds of attacks against deep learning models. More precisely, we show how a noise regularisation penalty, together with a generative adversarial network generator that reconstructs the images, allows one to obtain better reconstructions than the previous method.

Enhancing Personalization with Graph Neural Networks in Federated Learning

by Yan Qian

SFL (Structural Federated Learning) seamlessly integrates graph convolutional networks (GCN) into federated learning, paving the way for efficient knowledge sharing and personalized model training across decentralized clients. By harnessing both local model updates and the intrinsic relationship structures among clients, SFL facilitates a more nuanced and coherent global model aggregation. A hallmark of SFL lies in its capacity to learn and optimize adjacency matrices, encapsulating the interconnections between clients. Moreover, the approach employs multiple regularization techniques to fine-tune each client's model, mitigating the impact of data heterogeneity. This dual emphasis on structural relationships and personalized adjustments ensures that each client benefits from a model tailored to its unique data distribution. The fusion of GCN further offers a comprehensive perspective of the federated network, reinforcing model accuracy and resilience.

Distributionally Robust Routing Problems

by Patrick O'Hara

Given a probability distribution over the cost function of edges in a graph, a stochastic routing optimisation problem finds a route in the graph that minimises a risk function of the route cost. In practice, it is rare that we have access to the true data-generating distribution over edge costs. Instead, we are given a set of empirical cost observations. From the observations, we can construct a (Bayesian) machine learning model of the costs, then do out-of-sample prediction with our model on new, unseen data. If the observations are noisy or in limited supply, or if our model is misspecified, we might not trust the posterior predictive distribution of the model. In this talk, we discuss routing optimisation algorithms that are robust with respect to an ambiguity set of distributions that are close to the predictive distribution of the model. Real-world applications to air pollution are demonstrated.
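
A schematic way to write the distributionally robust objective sketched above (notation mine; the talk's exact risk function and ambiguity set are not specified here) is:

% Distributionally robust routing (illustrative formulation).
% R ranges over feasible routes, c is the random vector of edge costs,
% \rho is a risk functional, \hat{P} is the model's predictive distribution,
% and the ambiguity set is a ball of radius \epsilon around \hat{P}
% under a discrepancy D (e.g. a Wasserstein distance or a phi-divergence).
\min_{R \in \mathcal{R}} \;\; \sup_{Q \,:\, D(Q, \hat{P}) \le \epsilon} \;
  \rho_{c \sim Q}\!\left( \sum_{e \in R} c_e \right)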

Efficient low-bit quantization for Graph Neural Networks

by Rustam Guliyev

Graph Neural Networks (GNNs) are powerful tools for learning from graph-structured data but face scalability challenges due to large model sizes and the depth of networks, which can lead to efficiency and accuracy degradation. To address these issues in resource-constrained environments and avoid the oversmoothing problem in deep GNNs, we propose an integrated end-to-end solution for GNN quantization. The proposed quantization framework learns optimal quantization ranges for all stages of GNNs, from message passing during training to node classification during inference. This results in a compressed model that retains comparable accuracy even under low-bit quantization scenarios. We further discuss and measure the impact of graph partitioning and node clustering on GNN quantization, and examine the relationship of quantization error to the notorious oversmoothing problem.
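
To make the idea of learned quantization ranges concrete, here is a generic fake-quantization sketch in PyTorch (an illustration of the technique in general, not the authors' framework): a learnable clipping range is quantized to low-bit levels in the forward pass while gradients flow through a straight-through estimator:

import torch
import torch.nn as nn

class LearnedRangeFakeQuant(nn.Module):
    """Quantize non-negative activations to `bits` levels using a learnable clipping range [0, alpha]."""
    def __init__(self, bits: int = 4, init_alpha: float = 6.0):
        super().__init__()
        self.levels = 2 ** bits - 1
        self.alpha = nn.Parameter(torch.tensor(init_alpha))   # learned end-point of the range

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_clipped = torch.minimum(torch.clamp(x, min=0.0), self.alpha)
        scale = self.alpha / self.levels
        x_q = torch.round(x_clipped / scale) * scale
        # Straight-through estimator: quantized values forward, identity gradient backward.
        return x_clipped + (x_q - x_clipped).detach()

# Example: 4-bit quantization of node embeddings produced during message passing.
quant = LearnedRangeFakeQuant(bits=4)
h = torch.relu(torch.randn(8, 16, requires_grad=True))
out = quant(h)
out.sum().backward()   # gradients reach both the node embeddings and the learned range alpha
levels_used = torch.unique(torch.round(out.detach() * quant.levels / quant.alpha.detach())).numel()
print("distinct quantization levels used:", levels_used)   # at most 2**bits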

Theory and Foundations

Simple Programs with NP-hard Termination

by Henry Sinclair-Banks

I will introduce a really simple model of computation that is a heavily restricted counter machine. I will demonstrate the surprising computational power that the model possesses through examples. I aim to show you that deciding whether one of these programs terminates is an NP-hard problem.

Constant-depth circuits vs. monotone circuits

by Bruno Pasqualotto Cavalar

We establish new separations between the power of monotone and general (non-monotone) Boolean circuits:

- For every $k \geq 1$, there is a monotone function in ${\rm AC^0}$ (constant-depth poly-size circuits) that requires monotone circuits of depth $\Omega(\log^k n)$. This significantly extends a classical result of Okol'nishnikova (1982) and Ajtai and Gurevich (1987).
- For every $k \geq 1$, there is a monotone function in ${\rm AC^0}[\oplus]$ (constant-depth poly-size circuits extended with parity gates) that requires monotone circuits of size $\exp(\Omega(\log^k n))$. This makes progress towards a question posed by Grigni and Sipser (1992).

In the opposite direction, we show that the existence of significantly faster monotone simulations would lead to breakthrough circuit lower bounds. Finally, we revisit our separation result against monotone circuit size and investigate the limits of our approach.

Fault-Tolerant Kernel Oracles

by Peter Strulo

Kernelization and fault-tolerant oracles are both models of parameterized preprocessing. Fault-tolerant kernel oracles are a new model that aims to generalize both. I will present an overview of the background and introduce fault-tolerant kernel oracles using vertex cover as a running example.

Zero Knowledge Proofs For All Of DP

by Ari Biswas

Differential Privacy (DP) is advocated as a de facto standard for releasing aggregate statistics on sensitive data. However, in most implementations, we require the entity entrusted with releasing statistics to be honest, and assume that they will not deviate in any way to output malicious answers. To counteract this assumption, Biswas et al. (2022) introduced the idea of Interactive Proofs for Differential Privacy and demonstrated efficient constructions for the binomial mechanism. Assuming one-way functions exist, in this work we show that there exist interactive proofs for any DP mechanism, not just the binomial mechanism. We show this by constructing an efficient proof system for the exponential mechanism, a general technique known to express any DP mechanism (McSherry and Talwar, 2007). Furthermore, our construction naturally extends to the multi-party setting and is the first implementation of private selection with active security.
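
For background on the exponential mechanism itself (standard DP machinery, not the paper's proof system; the candidate set and utilities below are made up), a minimal sampler looks like this:

import numpy as np

def exponential_mechanism(utilities, epsilon, sensitivity, rng=None):
    """Sample index r with probability proportional to exp(epsilon * u(r) / (2 * sensitivity))."""
    rng = rng or np.random.default_rng()
    utilities = np.asarray(utilities, dtype=float)
    scores = epsilon * utilities / (2.0 * sensitivity)
    scores -= scores.max()                  # subtract the max for numerical stability
    probs = np.exp(scores)
    probs /= probs.sum()
    return rng.choice(len(utilities), p=probs)

# Example: privately select the most popular option; utilities are counts (sensitivity 1).
counts = [40, 42, 5, 13]
picks = [exponential_mechanism(counts, epsilon=1.0, sensitivity=1.0) for _ in range(1000)]
print(np.bincount(picks, minlength=len(counts)) / 1000)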

Distance-based robustness for Signal Temporal Logic

by Neha Rino

Signal temporal logic (STL) is a logic that expresses both temporal and timing properties of signals, for example, both that the value of the signal increased after a previous drop, and that the increase happened 5 seconds after said drop. When monitoring real-time systems, there is the qualitative matter of checking whether an execution satisfies its specification, and the quantitative question of measuring how well it satisfies the specification (or conversely, how badly it fails it). Such quantitative semantics for STL are known as robustness measures. We define a new robustness measure for STL based on the distance between a signal and the language of the specification under a suitable metric. We prove that it is at least NP-hard to compute in general, while providing a practical fragment of STL for which this robustness is efficiently computable.
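
One schematic way to write such a distance-based robustness (notation and sign convention mine; the talk's precise definition may differ) is as the signed distance from the signal to the boundary of the specification's language:

% w is a signal, \varphi an STL formula with language L(\varphi),
% and d a metric on signals; the sign records whether w satisfies \varphi.
\rho_d(w, \varphi) =
\begin{cases}
  \;\;\, \inf_{w' \notin L(\varphi)} d(w, w') & \text{if } w \in L(\varphi), \\[4pt]
  -\inf_{w' \in L(\varphi)} d(w, w') & \text{otherwise.}
\end{cases}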

Parity Index Hierarchy Collapses in Semantically-Deterministic Automata

by Aditya Prakash

A non-deterministic automaton is said to be semantically deterministic (SD) if every state that can be reached from the initial state upon reading a fixed word recognises the same language. We consider the notion of semantic determinism over parity automata, and show that the parity index hierarchy collapses. More specifically, we show that [1,2,3]-SD automata are as expressive as the class of omega-regular automata. We also show that [0,1,2]-SD automata are as expressive as deterministic automata with the same index ([0,1,2]). This generalises known results that [1,2]- and [0,1]-SD automata are as expressive as [1,2]- and [0,1]-deterministic automata respectively.