Updated on 2025/10/09

Information

 


 
AHMAD GENDIA
 
Organization
Faculty of Information Science and Electrical Engineering, Department of Advanced Information Technology
Title
Assistant Professor

Papers

  • Next-Gen UAV-Satellite Communications: AI Innovations and Future Prospects Reviewed International coauthorship International journal

    Hashima S., Gendia A., Hatano K., Muta O., Nada M.S., Mohamed E.M.

    IEEE Open Journal of Vehicular Technology   6   1990 - 2021   2025   eISSN:2644-1330


    Language:English   Publishing type:Research paper (scientific journal)   Publisher:IEEE Open Journal of Vehicular Technology  

The convergence of sixth-generation (6G) networks with unmanned aerial vehicles (UAVs) and satellites is poised to introduce substantial improvements to the landscape of wireless communication, paving the way for a unified and uninterrupted space-air-ground-sea network that ensures comprehensive global connectivity. At the heart of this transformative paradigm lies artificial intelligence (AI), which drives innovation across diverse sectors by enhancing decision-making autonomy, enabling real-time data processing, and optimizing network performance and coverage. This survey paper explores AI-enabled UAV-satellite communications for 6G applications, focusing on its challenges, potential, and future directions. The integrated system combines the strengths of 6G networks, UAVs, and satellites, opening up new possibilities in precision agriculture, disaster management, enhanced telecommunication services, and remote sensing. Despite its promise, the field faces complex challenges, including spectrum management, security risks, regulatory barriers, and the seamless integration of AI operations. This paper comprehensively analyzes these challenges, offering innovative solutions and outlining future research directions to unlock the complete capabilities of 6G-enabled UAV-satellite communications. Furthermore, it includes a case study demonstrating the effectiveness of multi-armed bandit (MAB) algorithms in optimizing resource allocation and decision-making for UAV-low Earth orbit (LEO) satellite communication scenarios, showcasing significant improvements in network performance. By addressing these critical aspects, this work lays the foundation for a new generation of ultra-connected, data-driven applications that will redefine global connectivity and technological advancement.

    DOI: 10.1109/OJVT.2025.3587028

    Web of Science

    Scopus

  • Energy-Efficient Trajectory Planning With Joint Device Selection and Power Splitting for mmWaves-Enabled UAV-NOMA Networks Reviewed International journal

    Gendia Ahmad, Muta Osamu, Hashima Sherief, Hatano Kohei

    IEEE Transactions on Machine Learning in Communications and Networking   2   617 - 632   2024   eISSN:2831-316X


    Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:Institute of Electrical and Electronics Engineers (IEEE)  

    This paper proposes two energy-efficient reinforcement learning (RL)-based algorithms for millimeter wave (mmWave)-enabled unmanned aerial vehicle (UAV) communications toward beyond-5G (B5G). This can be especially useful in ad-hoc communication scenarios within a neighborhood with main-network connectivity problems such as in areas affected by natural disasters. To improve the system’s overall sum-rate performance, the UAV-operated mobile base station (UAV-MBS) can harness non-orthogonal multiple access (NOMA) as an efficient protocol to grant ground devices access to fast downlink connections. Dynamic selection of suitable hovering spots within the target zone where the battery-constrained UAV needs to be positioned as well as calibrated NOMA power control with proper device pairing are critical for optimized performance. We propose cost-subsidized multiarmed bandit (CS-MAB) and double deep Q-network (DDQN)-based solutions to jointly address the problems of dynamic UAV path design, device pairing, and power splitting for downlink data transmission in NOMA-based systems. To verify that the proposed RL-based solutions support high sum-rates, numerical simulations are presented. In addition, exhaustive and random search benchmarks are provided as baselines for the achievable upper and lower sum-rate levels, respectively. The proposed DDQN agent achieves 96% of the sum-rate provided by the optimal exhaustive scanning whereas CS-MAB reaches 91.5%. By contrast, a conventional channel state sorting pairing (CSSP) solver achieves about 89.3%.

    DOI: 10.1109/TMLCN.2024.3396438

    Web of Science

    CiNii Research

  • Cache-enabled reinforcement learning scheme for power allocation and user selection in opportunistic downlink NOMA transmissions Reviewed International coauthorship International journal

    Gendia A., Muta O., Nasser A.

    IEEJ Transactions on Electrical and Electronic Engineering   17 ( 5 )   722 - 731   2022.5   ISSN:19314973 eISSN:1931-4981


Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:IEEJ Transactions on Electrical and Electronic Engineering  

Non-orthogonal multiple access (NOMA) allows multiple user equipment (UE) to simultaneously share the same resource blocks using varying levels of transmit power at the base station (BS) side. Proper allocation of transmission power and selection of candidate users for pairing over the same resource block are critical for efficient utilization of the available resources. Optimal UE selection and power splitting among paired UEs can be made through an exhaustive search over the space of all possible solutions. However, the cost incurred by such an approach can render it practically infeasible. Reinforcement learning (RL) with double deep Q-networks (DDQN) is a promising framework for tackling this problem. In this article, an RL-based DDQN scheme is proposed for user pairing in opportunistic access to downlink NOMA systems with capacity-limited backhaul link connections. The proposed algorithm relies on proactive data caching to alleviate the throttling caused by backhaul bottlenecks, and optimized UE selection and power allocation are accomplished through continuous interaction between an RL agent and the NOMA environment to increase the overall system throughput. Simulation results are presented to showcase the near-optimal strategy achieved by the proposed scheme.

    DOI: 10.1002/tee.23560

    Web of Science

    Scopus

    CiNii Research

  • OFDM PAPR Reduction via Time-Domain Scattered Sampling and Hybrid Batch Training of Synchronous Neural Networks Reviewed International journal

    Gendia A., Muta O.

    Electronics Switzerland   10 ( 14 )   2021.7   eISSN:2079-9292


    Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:Electronics Switzerland  

Peak-to-average power ratio (PAPR) reduction in multiplexed signals in orthogonal frequency division multiplexing (OFDM) systems has been a long-standing critical issue. Clipping and filtering (CF) techniques offer good PAPR reduction performance at the expense of a relatively high computational cost that is inherent in the repeated application of fast Fourier transform (FFT) operations. The ever-increasing demand for low-latency operation calls for novel low-complexity solutions to the PAPR problem. To address this issue while providing enhanced PAPR reduction, we propose a synchronous neural network (NN)-based solution that exceeds the limits of conventional CF schemes with lower computational complexity. The proposed scheme trains a neural network module using hybrid collections of samples from multiple OFDM symbols to arrive at a signal mapping with desirable characteristics. The benchmark NN-based approach provides performance comparable to conventional CF; however, it can underfit or overfit due to its asynchronous nature, which leads to increased out-of-band (OoB) radiation and deteriorated bit error rate (BER) performance for high-order modulations. Simulation results demonstrate the effectiveness of the proposed scheme in terms of the achieved cubic metric (CM), BER, and OoB emissions.

    DOI: 10.3390/electronics10141708

    Web of Science

    Scopus

  • Observing Changes in Pore Pressure Using Controlled Permanent Seismic Source and Distributed Seismometer Network Reviewed International coauthorship

    Imam T., Tsuji T., Gendia A.

    6th Asia Pacific Meeting on Near Surface Geoscience and Engineering Smart Technologies Kind to the Planet   2024   ISBN:9789462824997


    Authorship:Last author   Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:6th Asia Pacific Meeting on Near Surface Geoscience and Engineering Smart Technologies Kind to the Planet  

We employed a permanent active seismic source and a seismic array to investigate temporal variations in seismic wave velocity through the Earth's crust. Monitoring crustal changes is crucial for understanding earthquakes, volcanic eruptions, and fluid dynamics. This study focused on enhancing sensitivity to travel time variations, using extended stacking periods and capturing signal propagation beyond 80 km with frequencies between 15.11 and 22.11 Hz. Over a 4.5-month monitoring period, the data revealed travel time variations related to rainfall and earthquakes, with uncertainty estimates of 0.016% to 0.07% for proximate and remote seismometers, respectively. This precision allowed the detection of changes linked to pore pressure and fluid saturation despite high-frequency source constraints. The study establishes a comprehensive link between seismic activities and environmental events, offering valuable insights for fields such as carbon capture and storage (CCS). Understanding earthquake origins, whether natural or induced, is crucial for monitoring induced seismicity. The findings provide insights for further studies in geological and environmental monitoring, showcasing the effectiveness of the permanent seismic source in monitoring temporal travel time changes over distances exceeding 80 km with high accuracy.

    DOI: 10.3997/2214-4609.202471087

    Scopus

  • Deep Reinforcement Learning Based Computing Resource Allocation in Fog Radio Access Networks Reviewed International coauthorship

    Tong Z., Li Z., Gendia A., Muta O.

    IEEE Vehicular Technology Conference   2024   ISSN:15502252 ISBN:9798331517786


    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE Vehicular Technology Conference  

The integration of artificial intelligence (AI) with fog radio access networks (F-RANs) has garnered great interest, primarily motivated by the need for efficient network operation and high service availability. Fog access points (F-APs) can help with computation offloading and thereby alleviate the huge computational burdens of terminal devices in F-RANs. However, the overall system energy consumption must be minimized. As described herein, we propose a computation offloading strategy for industrial internet-of-things (IIoT) devices centered around deep reinforcement learning (DRL)-based user and F-AP association, which can learn from high-dimensional data and respond to dynamic changes in the environment. The proposed DRL model adopts a framework that deploys the agent at the user side to address the challenge of high dimensionality in the action space. Specifically, each IIoT device is assigned a dedicated DRL model within the framework, facilitating the identification of an appropriate F-AP based on the environment state. Once the user and F-AP association process is completed, a computationally efficient greedy algorithm is used at each F-AP, considering its limited capability, to determine the subset of offloading requests that should be forwarded to the cloud for additional processing. The simulation results showcase the superior performance of the proposed DRL algorithm over traditional algorithms, including the random algorithm and the greedy algorithm, in terms of energy consumption. Under the same operation time, DRL also outperforms the genetic algorithm.

    DOI: 10.1109/VTC2024-Fall63153.2024.10757816

    Scopus

  • UAV Positioning with Joint NOMA Power Allocation and Receiver Node Activation Reviewed

    Gendia A., Muta O., Hashima S., Hatano K.

    IEEE International Symposium on Personal Indoor and Mobile Radio Communications PIMRC   2022-September   240 - 245   2022   ISSN:2166-9570 ISBN:9781665480536


    Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE International Symposium on Personal Indoor and Mobile Radio Communications PIMRC  

This paper proposes reinforcement learning (RL)-based solutions for unmanned aerial vehicle (UAV) data offloading in B5G mmWave-enabled communications. This is particularly useful for ad-hoc transmission scenarios within environments experiencing connectivity issues with the main servicing network, as in disaster-stricken areas. Double deep Q-network and multiarmed bandit-based algorithms are proposed to tackle the joint problem of UAV positioning, Rx-node activation, and power allocation for data offloading in downlink NOMA transmissions. Numerical simulations are performed to verify that the proposed RL-based algorithms provide high data transfer rates, with random and exhaustive search solutions as benchmarks for lower and upper bounds on the achievable sum-rate levels.

    DOI: 10.1109/PIMRC54779.2022.9978021

    Web of Science

    Scopus

    CiNii Research


Research Projects

  • Virtual MIMO-NOMA Design and Relay-Assisted Cooperative NOMA Using AI-Based Cross-Layer Optimization

    Grant number:25K23457  2025.7 - 2027.3

    Grants-in-Aid for Scientific Research  Grant-in-Aid for Research Activity Start-up

    Ahmad Gendia


    Grant type:Scientific research funding

This research aims to improve various aspects of future NOMA networks by integrating technologies such as virtual MIMO and IRS-aided design. Outage-event analysis and data-driven operation for tuned NOMA resource management are also targeted, and machine learning methods will be explored.

    CiNii Research