Publications
2025
- IEEE MASS: Bit-Flipping Attack Exploration and Countermeasure in 5G Network. Joon Kim, Chengwei Duan, and Sandip Ray. In 2025 IEEE 22nd International Conference on Mobile Ad-Hoc and Smart Systems (MASS), 2025
5G communication technology has become a vital component in a wide range of applications due to its unique advantages such as high data rate and low latency. While much of the existing research has focused on optimizing its efficiency and performance, security considerations have not received comparable attention, potentially leaving critical vulnerabilities unexplored. In this work, we investigate the vulnerability of 5G systems to bit-flipping attacks, an integrity attack in which an adversary intercepts 5G network traffic and modifies specific fields of an encrypted message without decryption, thus mutating the message while it remains valid to the receiver. Notably, these attacks do not require the attacker to know the plaintext; knowledge of the semantic meaning or position of certain fields is enough to effect targeted modifications. We conduct our analysis on OpenAirInterface (OAI), an open-source 5G platform that follows the 3GPP Technical Specifications, to rigorously test the real-world feasibility and impact of bit-flipping attacks under current 5G encryption mechanisms. Finally, we propose a keystream-based shuffling defense mechanism to mitigate the effect of such attacks by raising the difficulty of manipulating specific encrypted fields, while introducing no additional communication overhead compared to the NAS Integrity Algorithm (NIA) in 5G. Our findings reveal that enhancements to 5G security are needed to better protect against attacks that alter data during transmission at the network level.
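The bit-flipping attack described above can be illustrated with a toy sketch. This is not the paper's setup: it assumes a generic XOR stream cipher standing in for 5G's keystream-based encryption, and the message format, field positions, and values are hypothetical. It shows how an attacker who knows only the position of a field can change its decrypted value without the key.

```python
import os

def xor_keystream(data: bytes, keystream: bytes) -> bytes:
    """Toy stream cipher: ciphertext = plaintext XOR keystream."""
    return bytes(d ^ k for d, k in zip(data, keystream))

# Sender encrypts a message; the attacker never sees the keystream.
keystream = os.urandom(16)
plaintext = b"PAY=001;DST=0005"
ciphertext = xor_keystream(plaintext, keystream)

# Attacker knows only the position/semantics of a field (bytes 4-6, the
# amount). XORing C[i] with (old_byte ^ new_byte) changes the decrypted
# plaintext at that position without decrypting anything.
tampered = bytearray(ciphertext)
for i, (old, new) in enumerate(zip(b"001", b"999"), start=4):
    tampered[i] ^= old ^ new

# The receiver decrypts normally and sees the attacker-chosen value.
decrypted = xor_keystream(bytes(tampered), keystream)
assert decrypted == b"PAY=999;DST=0005"
```

Because XOR is bitwise, the modification stays confined to the targeted field and the rest of the message decrypts unchanged, which is why the paper's countermeasure shuffles the keystream-to-field mapping rather than relying on encryption alone.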
@inproceedings{11206202, author = {Kim, Joon and Duan, Chengwei and Ray, Sandip}, booktitle = {2025 IEEE 22nd International Conference on Mobile Ad-Hoc and Smart Systems (MASS)}, title = {Bit-Flipping Attack Exploration and Countermeasure in 5G Network}, year = {2025}, volume = {}, number = {}, pages = {640-645}, keywords = {Costs;5G mobile communication;Semantics;Redundancy;Telecommunication traffic;Receivers;Smart systems;Encryption;Low latency communication;Payloads}, url = {https://arxiv.org/abs/2511.04882}, doi = {10.1109/MASS66014.2025.00104}, }
- Intell.-Based Med.: In-silo federated learning vs. centralized learning for segmenting acute and chronic ischemic brain lesions. Joon Kim, Hoyeon Lee, Jonghyeok Park, and 7 more authors. Intelligence-Based Medicine, 2025
Objectives: To investigate the efficacy of federated learning (FL) compared to industry-level centralized learning (CL) for segmenting acute infarct and white matter hyperintensity. Materials and methods: This retrospective study included 13,546 diffusion-weighted images (DWI) from 10 hospitals and 8,421 fluid-attenuated inversion recovery (FLAIR) images from 9 hospitals for acute (Task I) and chronic (Task II) lesion segmentation. We trained with datasets originating from 9 and 3 institutions for Task I and Task II, respectively, and externally tested on datasets originating from 1 and 6 institutions each. For FL, the central server aggregated training results every four rounds with FedYogi (Task I) and FedAvg (Task II). A batch clipping strategy was tested for the FL models. Performance was evaluated with the Dice similarity coefficient (DSC). Results: The mean ages (SD) for the training datasets were 68.1 (12.8) for Task I and 67.4 (13.0) for Task II. The proportion of male participants was 51.5 % and 60.4 %, respectively. In Task I, the FL model employing batch clipping trained for 360 epochs achieved a DSC of 0.754 ± 0.183, surpassing an equivalently trained CL model (DSC 0.691 ± 0.229; p < 0.001) and comparable to the best-performing CL model at 940 epochs (DSC 0.755 ± 0.207; p = 0.701). In Task II, no significant differences were observed among the FL model with clipping, the FL model without clipping, and the CL model after 48 epochs (DSCs of 0.761 ± 0.299, 0.751 ± 0.304, and 0.744 ± 0.304). Few-shot FL showed significantly lower performance. In Task II, batch clipping reduced training time from 3.5 to 1.75 h. Conclusions: Comparisons between CL and FL in identical settings suggest the feasibility of FL for medical image segmentation.
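The server-side aggregation step referenced above (FedAvg, used for Task II) can be sketched as a sample-count-weighted average of client parameters. This is a minimal NumPy illustration, not the paper's implementation: the parameter shapes, silo sizes, and values are made-up assumptions.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: average client parameters weighted by local dataset size.

    client_weights: list of parameter arrays, one per client (silo).
    client_sizes:   number of local training samples per client.
    """
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical hospitals contribute local model updates; the silo
# with the most data contributes the most to the global model.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 200, 700]
global_weights = fedavg(clients, sizes)
# (0.1*1 + 0.2*3 + 0.7*5, 0.1*2 + 0.2*4 + 0.7*6) = (4.2, 5.2)
```

In the in-silo setting each hospital keeps its images local and only these aggregated parameters cross institutional boundaries, which is the property that makes FL attractive for multi-center medical imaging.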
@article{KIM2025100283, title = {In-silo federated learning vs. centralized learning for segmenting acute and chronic ischemic brain lesions}, journal = {Intelligence-Based Medicine}, volume = {12}, pages = {100283}, year = {2025}, issn = {2666-5212}, doi = {10.1016/j.ibmed.2025.100283}, url = {https://www.sciencedirect.com/science/article/pii/S2666521225000870}, author = {Kim, Joon and Lee, Hoyeon and Park, Jonghyeok and Park, Sang Hyun and Lee, Myungjae and Sunwoo, Leonard and Kim, Chi Kyung and Kim, Beom Joon and Kim, Dong-Eog and Ryu, Wi-Sun}, keywords = {Federated learning, Image segmentation, Ischemic brain lesion, Machine learning}, }
2024
- arXiv: Random Gradient Masking as a Defensive Measure to Deep Leakage in Federated Learning. Joon Kim and Sejin Park. Preprint, 2024
Federated Learning (FL), in theory, preserves the privacy of individual clients’ data while producing quality machine learning models. However, attacks such as Deep Leakage from Gradients (DLG) seriously call the practicality of FL into question. In this paper, we empirically evaluate the efficacy of four defensive methods against DLG: Masking, Clipping, Pruning, and Noising. Masking, while previously studied only as a way to compress information during parameter transfer, shows surprisingly robust defensive utility when compared to the other three established methods. Our experimentation is two-fold. We first evaluate the minimum hyperparameter threshold for each method across the MNIST, CIFAR-10, and LFW datasets. Then, we train FL clients with each method at its minimum threshold value to investigate the trade-off between DLG defense and training performance. Results reveal that Masking and Clipping show little to no degradation in performance while obfuscating enough information to effectively defend against DLG.
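The masking defense evaluated above can be sketched as randomly zeroing a fraction of gradient entries before they leave the client. This is a hypothetical illustration, not the paper's code: the masking ratio, array shapes, and random seed are arbitrary assumptions.

```python
import numpy as np

def mask_gradients(grad: np.ndarray, ratio: float, rng=None) -> np.ndarray:
    """Randomly zero a fraction `ratio` of gradient entries before upload.

    Zeroed entries carry no information for a gradient-inversion attack
    such as DLG, while the surviving entries still drive training.
    """
    rng = rng or np.random.default_rng()
    mask = rng.random(grad.shape) >= ratio  # keep each entry w.p. 1 - ratio
    return grad * mask

# Hypothetical client-side gradient, masked before being sent to the server.
rng = np.random.default_rng(0)
grad = rng.normal(size=(4, 4))
masked = mask_gradients(grad, ratio=0.5, rng=rng)
# Every entry is either untouched or exactly zero; roughly half survive.
```

Because each surviving entry is transmitted unmodified, this sketch also hints at why masking can cost little accuracy, in contrast to noising, which perturbs every entry it sends.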
@article{kim2024randomgradientmaskingdefensive, title = {Random Gradient Masking as a Defensive Measure to Deep Leakage in Federated Learning}, author = {Kim, Joon and Park, Sejin}, year = {2024}, eprint = {2408.08430}, archiveprefix = {arXiv}, primaryclass = {cs.LG}, url = {https://arxiv.org/abs/2408.08430}, doi = {10.48550/arXiv.2408.08430}, journal = {Preprint}, }