AI Trust, Risk, and Security Management

Framework, Principles, and Practices

Edited by R. Karthick Manoj, S. Senthilnathan, S. Arunmozhi Selvi, T. Ananth Kumar, S. Balamurugan
Series: Leading-Edge Breakthroughs in Artificial Intelligence
Copyright: 2026   |   Expected Pub Date: 1/30/2026
ISBN: 9781394392995  |  Hardcover  |  402 pages
Price: $225 USD

Audience
For industry practitioners, academic researchers, and governance professionals alike,
this book offers both clarity and depth in one of the most important domains
of modern technology. As AI matures, trust and risk management will define
its success—and this book lays the groundwork for achieving that vision.

Description
As AI continues to permeate sectors ranging from healthcare to finance, ensuring that these systems are not only powerful but also accountable, transparent, and secure is more critical than ever. This book offers a vital exploration of the intersection of trustworthiness, risk mitigation, and security governance in artificial intelligence systems, serving as a definitive guide for professionals, researchers, and policymakers striving to build, deploy, and manage AI responsibly in high-stakes environments. Taking a comprehensive approach, it explores how to integrate technical safeguards, organizational practices, and regulatory alignment to manage the unique risks posed by AI, including algorithmic bias, data misuse, adversarial attacks, and opaque decision-making. The result is a strategic approach that not only identifies vulnerabilities but also promotes resilient, auditable, and trustworthy AI ecosystems.
At its core, AI TRiSM is a forward-looking concept that embraces the realities of AI in production environments. The framework moves beyond traditional static models of governance to propose dynamic, adaptive controls that evolve alongside AI systems. Through real-world case studies, the book outlines how tools like model cards, bias audits, and zero-trust architectures can be embedded into the AI development lifecycle.
Readers will find that this volume:
• Introduces concepts to stay ahead of regulations and build trustworthy AI systems that customers and stakeholders can rely on;
• Addresses security threats, bias, and compliance gaps to avoid costly AI failures;
• Explores proven frameworks and best practices for deploying AI responsibly and gaining a competitive edge;
• Provides comprehensive guidance through real-world case studies and contributions from industry and academia.

Author / Editor Details
R. Karthick Manoj, PhD is an Assistant Professor at the Academy of Maritime Education and Training, Tamil Nadu, India, with more than 14 years of experience. His scholarly contributions include six national and twelve international journal articles, four patents, three books, ten book chapters, and more than fifteen conference presentations.

S. Senthilnathan, PhD is an Assistant Professor in the Department of Electronics and Communication Engineering in the School of Engineering and Technology at Christ University, Bangalore, India. His research interests include quantum dot cellular automata and quantum computing.

S. Arunmozhi Selvi, PhD is a Professor at Holy Cross Engineering College, Anna University, Tamil Nadu, India, with more than 15 years of research and teaching experience. She has published 30 articles in international journals and conference proceedings and written many book chapters.

T. Ananth Kumar, PhD is an Associate Professor in the Department of Computer Science and Engineering, IFET College of Engineering, Tamil Nadu, India. He has authored one book, edited six books and several book chapters, and presented papers in various national and international journals and conferences.

S. Balamurugan, PhD is the Director of Research at iRCS, an Indian technological research and consulting firm in Coimbatore, India. He has published 100 books and 300 papers in international journals and conferences, and holds 300 patents. With 20 years of experience researching various cutting-edge technologies, he provides expert guidance in technology forecasting and decision making for leading companies and startups.

Table of Contents
Series Preface
Preface
Part I: Fundamentals of Trustworthy and Transparent AI
1. Creating Trustworthy AI: A Lifecycle Risk Management Framework
Satish Kumar S., Bharathi K., Vinod S., Rudhra S., Balaraman R. and Suresh A.
1.1 Introduction
1.2 Methodology
1.2.1 Risk Measurement
1.2.2 Strategies for Risk Minimization
1.2.3 Observation and Reporting
1.2.4 Administration
1.2.5 Risk Prioritization
1.2.6 AI Risks and Credibility
1.3 Research Contribution and Future Direction
1.4 Conclusion
References
2. Comprehensibility and Transparency of AI Systems with Applications
N. Hemalatha, R. Elavarasi, P. Gajalakshmi, N. Magadevi and D. Kadhiravan
2.1 Introduction
2.2 Methods for Explainability and Interpretability
2.2.1 Intrinsically Interpretable Models
2.2.2 Black Box Models
2.2.3 Model Agnostic Methods
2.2.4 Causal Models
2.2.5 Adversarial Examples
2.2.6 Non-Agnostic Methods
2.3 Methodologies
2.3.1 Titanic Dataset Using the Decision Tree Model
2.3.1.1 Step-by-Step Real-Time Application for Customer Churn Prediction
2.3.2 Predictive Maintenance and Condition-Based Monitoring (CBM) for Maritime Drive Systems Using Machine Learning Techniques
2.3.2.1 Predictive Maintenance in Ship’s Gas Turbine Propulsion System
2.3.2.2 Dataset Observation in Maritime Propulsion Systems
2.3.2.3 Exploratory Data Analysis (EDA)
2.3.2.4 Feature Engineering
2.3.2.5 Model Selection and Training
2.3.3 AI in Diagnostics and Treatment Recommendations
2.3.3.1 IBM Watson for Oncology and Other Diagnostic Tools
2.3.3.2 Predicting Diabetes Risk Using LIME
2.3.3.3 Impact on Healthcare Professionals and Patients
2.4 Challenges in Explainability and Interpretability
2.5 Techniques for Improving Transparency
2.6 Conclusion and Future Work
2.6.1 Summary of Key Takeaways
2.6.2 The Importance of Building Trust in AI Systems through Explainability and Interpretability
2.6.3 Final Thoughts on the Evolving Role of AI Across Critical Industries
References
3. Leveraging Correlation Analysis for Effective Feature Selection in AI Model Development
Raju Arumugam
3.1 Introduction
3.1.1 Review of the Literature
3.1.2 Overview of AI and Machine Learning Models
3.1.3 Importance of Feature Selection
3.2 Feature Selection through the Correlation Method
3.2.1 Understanding Correlation Analysis
3.2.2 Basics of Correlation
3.2.2.1 Positive Correlation
3.2.2.2 Negative Correlation
3.2.2.3 No Correlation
3.2.3 Correlation Coefficients
3.2.4 Interpretation of the Correlation Results
3.2.4.1 Strong Correlation
3.2.4.2 Moderate Relationships
3.2.4.3 Weak or No Relationship
3.3 Role of Correlation Analysis in Feature Selection
3.3.1 Identifying Relevant Features
3.3.2 Eliminating Redundant Features
3.4 Correlation Techniques for Heterogeneous Data
3.4.1 Pearson Correlation
3.4.2 Spearman and Kendall Correlation
3.4.3 Choosing the Right Correlation Method
3.5 Correlation-Based Feature Selection in Practice
3.5.1 Preprocessing Step in Machine Learning Pipelines
3.5.2 Identifying and Removing Duplicated Features
3.5.3 Simplifying the Model
3.5.4 Improving Model Interpretability
3.5.5 Applications in Various Domains
3.5.5.1 Healthcare: Patient Characteristic Identification and Its Influence on Illness Outcomes
3.5.5.2 Finance: Selecting Economic Indicators for Forecasting Stock Markets
3.5.5.3 Marketing: Consumer Behavior and Purchase Decision Factors
3.6 Leveraging Correlation Analysis for Effective Feature Selection in AI Model Development
3.6.1 Introduction to Correlation Analysis in Feature Selection
3.6.2 Applying Correlation in Decision Trees and Random Forests
3.6.3 Correlation Analysis in Deep Learning
3.6.4 Combining Correlation with Other Feature Selection Methods
3.6.5 Applications of Correlation-Based Feature Selection
3.6.6 Limitations and Best Practices for Correlation-Based Feature Selection
3.7 Challenges and Limitations of Correlation Analysis in Feature Selection
3.7.1 Nonlinear Relationships
3.7.2 Overcoming Limitations
3.7.2.1 Hybrid Approaches for Enhanced Feature Selection
3.7.2.2 Application of Hybrid Methods in Model Building
3.8 Best Practices for Leveraging Correlations in Feature Selection
3.8.1 General Guidelines
3.8.2 Handling Multicollinearity
3.8.3 Ensuring Model Robustness and Interpretability
3.9 Future Work
References
4. Fusion-Based CNN Ensemble with Grad-CAM for Trustworthy and Transparent Plant Disease Detection
G. Abirami and S. Aasha Nandhini
4.1 Introduction
4.1.1 Importance of Early Plant Disease Detection
4.1.2 Rise of AI and Deep Learning in Agriculture
4.1.3 The Need for Explainable AI in Agriculture
4.1.4 Role of Ensemble Learning in Robust Classification
4.1.5 Real-World Challenges in Plant Disease Diagnosis
4.1.6 Research Objectives and Contributions
4.1.7 Structure of the Paper
4.2 Proposed Methodology
4.2.1 Input Image Acquisition
4.2.2 Preprocessing
4.2.3 Data Augmentation
4.2.4 CNN Backbone Architectures
4.2.5 Feature Extraction and Fusion
4.2.6 Ensemble Prediction Module
4.2.7 Explainability Using Grad-CAM
4.2.8 Output Interpretation and Agronomic Decision Support
4.2.9 Summary of Methodological Strengths
4.3 Experimental Setup
4.3.1 Dataset Description
4.3.2 Data Splitting Strategy
4.3.3 Experimental Pipeline
4.4 Results and Discussion
4.4.1 Classification Performance
4.4.2 Confusion Matrix Analysis
4.4.3 ROC Curve Evaluation
4.4.4 Visual Explanations Using Grad-CAM
4.4.5 Robustness under Variable Conditions
4.4.6 Cross-Validation and Stability Analysis
4.4.7 Comparative Analysis
4.5 Conclusion and Future Work
References
5. Case Studies and Applications of Explainability and Interpretability in AI Models
P. Gajalakshmi, N. Hemalatha, R. Elavarasi, N. Magadevi and D. Kadhiravan
5.1 Introduction
5.2 SHAP (SHapley Additive Explanations)
5.2.1 SHAP Value Formula
5.2.2 Simplified Computation in Practice
5.2.3 Key Features of SHAP
5.2.4 Example Case Study: AI for Heart Disease Risk Prediction
5.3 Finance: Credit Scoring and Fraud Detection
5.3.1 Role of AI in Credit Scoring and Fraud Detection
5.3.2 Case Study: AI Models in Banking and Finance
5.3.3 Explainability Challenges in Finance
5.3.4 Approaches to Improve Interpretability
5.3.5 Autonomous Vehicles: Decision-Making in Critical Situations
5.3.6 Ethical Concerns and Decision-Making in Critical Situations
5.4 Interpretability Techniques
5.4.1 Impact on Public Safety, Accountability, and Legal Implications
5.4.2 Legal Systems: Sentencing Recommendations and Risk Assessments
5.4.3 Case Study: COMPAS and Similar Tools Used in Courts
5.4.4 Explainability Issues in Legal AI Models (e.g., Bias, Fairness)
5.4.5 Approaches for Enhancing Transparency (e.g., LIME, SHAP, Post-Hoc Analysis)
5.4.6 Common Challenges in Improving Explainability and Interpretability Across Sectors
5.4.7 Future Directions for Research and Development in AI Transparency
5.4.8 The Role of Regulations and Ethical Guidelines in Shaping AI Systems
5.5 Conclusion
References
Part II: Privacy-Preserving and Secure AI Systems
6. Privacy-Preserving AI Techniques: Protecting Data in the Age of AI
N. Ram Shankar, S. Suhasini, M. Aravind Adityaa, B. Charan Sai, R. Deekshit, D. Derrick Nathaniel and K. Manikandan
6.1 Introduction
6.1.1 The Importance of Privacy in AI
6.2 Key Privacy-Preserving Techniques
6.2.1 Differential Privacy
6.2.2 Apple's Differential Privacy Implementation
6.2.3 Google’s Federated Learning in Gboard
6.2.4 Homomorphic Encryption
6.2.5 IBM’s Homomorphic Encryption for Healthcare
6.3 Secure Multi-Party Computation (SMPC)
6.3.1 Privacy-Preserving Clinical Trials
6.3.2 Challenges and Future Directions
6.3.3 Healthcare
6.3.4 Finance
6.3.5 Crime and Fraud Detection with Differential Privacy
6.3.6 Marketing
6.3.7 Ethical Considerations in Privacy-Preserving AI
6.3.8 Future Trends in Privacy-Preserving AI
6.3.9 The Global Perspective on Privacy-Preserving AI
6.3.10 The Role of Education and Awareness
6.3.11 The Impact of Emerging Technologies
6.4 Summary
6.5 Conclusion: Charting a Responsible Path Forward
References
7. Federated Learning for Early Detection of Chronic Diseases: Privacy-Preserving Models in Population Health Management
A.V. Sriharsha and Sai Nomitha Yarabolu
7.1 Introduction
7.1.1 Background and Importance of Chronic Disease Management
7.1.2 Challenges in the Early Detection of Chronic Diseases
7.1.3 Role of AI in Population Health Management
7.1.4 Privacy Concerns in Healthcare Data
7.2 Literature Review
7.2.1 Overview of Federated Learning (FL)
7.2.1.1 What is Federated Learning?
7.2.1.2 Key Features of Federated Learning
7.2.2 Comparison with Centralized Machine Learning Models
7.2.3 Privacy Considerations in Health Data
7.2.4 Research Gaps in Existing Work
7.2.4.1 Limitations of Current Privacy-Preserving Approaches
7.2.4.2 Challenges in Federated Learning for Healthcare
7.3 Proposed Methodology
7.3.1 Federated Learning Framework
7.3.2 Applications in Chronic Disease Management
7.3.3 Theoretical Framework for Early Detection of Chronic Diseases
7.3.3.1 Conceptual Model Architecture
7.3.3.2 Conceptual Framework and Data Simulation
7.3.3.3 Model Architecture and Training Protocol
7.3.3.4 Privacy-Preserving Techniques
7.4 Results and Observations
7.4.1 Federated Learning Framework Overview
7.4.2 Model Accuracy and Scalability
7.4.3 Visualization of Results
7.4.4 Comparison with Previous Research
7.4.5 Performance Analysis of Federated vs. Traditional Models
7.5 Conclusion
7.5.1 Summary of Findings
7.5.1.1 Federated Learning’s Role in Chronic Disease Prevention
7.5.1.2 Implications for Population Health Management
7.5.2 Contributions of the Study
7.5.3 Limitations and Challenges
7.5.3.1 Data Variability and Non-IID Challenges
7.5.3.2 Communication Challenges
7.5.3.3 Tailoring Models for Specific Needs
7.5.3.4 Regulatory and Ethical Compliance
7.5.4 Future Directions
7.5.4.1 Advancements in Federated Learning
7.5.4.2 Integrating Enhanced Privacy Measures
7.5.4.3 Expanding Use Cases in Healthcare
References
8. Secure and Trustworthy AI for Efficient Diabetic Retinopathy Screening with Deep Learning Model
S. Sreedevi, K. Sarmila Har Beagam, G. Ezhilarasi and D. Lakshmi
8.1 Introduction
8.2 Related Works
8.3 Methodology
8.4 Results and Discussion
8.5 Conclusion
References
9. Addressing Security Challenges in AI-Driven Cyber Security: Enhancing Resilience While Fostering Sustainable Practices with Green Computing
P. Geetha, G. Abirami, T. Padmavathy, S. Sivagami and D. Vinodha
9.1 Introduction
9.1.1 Goal of the Work
9.2 Review Study
9.3 How Green Computing Initiatives Promote Sustainability Through Cyber Security Algorithms
9.3.1 Cyber Security Algorithms to Attain Green Computing
9.3.1.1 Encryption Algorithms
9.3.1.2 Hashing Algorithms
9.3.1.3 Lightweight Cryptography
9.3.1.4 Intrusion Detection Algorithms
9.3.1.5 Data Compression Algorithms
9.4 Conclusion
References
Part III: AI in Smart Healthcare, Agriculture, and Energy and Power Systems
10. Enhancing Breast Cancer Health Care Using Vision Transformer Processing with Dingo Optimization
S. Baulkani and Koushalya S.
10.1 Introduction
10.1.1 Breast Cancer as a Public Health Concern
10.1.2 The Role of Deep Learning in Breast Cancer Diagnosis
10.1.3 Challenges in Deep Learning-Based Classification
10.1.3.1 Dataset Limitations
10.1.3.2 Labeling Issues
10.1.3.3 Model Interpretability
10.1.3.4 Computational Constraints
10.1.4 Future Directions in Breast Cancer Classification
10.1.4.1 Explainable AI (XAI)
10.1.4.2 Integration with Radiomics
10.1.4.3 Multi-Modal Learning
10.1.4.4 Federated Learning
10.2 Literature Review
10.2.1 DL Models for Classifying Breast Cancer
10.2.2 Assessment of Datasets for Deep Learning Models
10.2.3 Challenges in DL-Based BC Classification: Present and Future
10.2.4 Image Classification for BC
10.2.5 Deep Learning Models’ Accomplishments in BC Image Classification
10.2.5.1 Risk Factors for the Development of Breast Cancer
10.2.5.2 Breast Cancer Diagnosis and Treatment
10.2.5.3 Improvements in the Prognosis of BC
10.3 Proposed Methodology
10.3.1 Vision Transformer for Breast Cancer Detection
10.3.1.1 Advantages of Vision Transformers on the DDSM Dataset
10.3.2 Dingo Optimization for Mammogram Image Processing
10.3.2.1 Optimization Process
10.3.3 Comparison of Optimizers on Mammogram Image Classification
10.3.3.1 Expectations for Healthcare Applications
10.4 Experimental Results
10.4.1 Understanding ROC Curves
10.5 Conclusion
References
11. Enhancing Biometric Identification: A Trustworthy Framework for Toddler Iris Recognition through AI Innovations
Ramesh S. and V. Krishnaveni
11.1 Introduction
11.2 Literature Survey
11.3 Proposed Methodology
11.3.1 Image Acquisition
11.3.2 GAN Architecture as Preprocessing Stage
11.3.3 Segmentation with Dense U-Net
11.3.4 Swin Transformer-Based Feature Extraction
11.3.5 Pattern Matching with Triplet Loss Function
11.3.6 Decision Matching
11.4 Results and Discussions
11.5 Conclusion and Future Work
References
12. AI-Enhanced Reactive Power Compensation in Weak Grids Integrating Wind Energy Systems: A Trustworthy and Risk-Managed Approach
R. Rajasree, D. Lakshmi, K. Stalin and R.K. Padmashini
12.1 Introduction to Wind Energy Systems
12.2 Trustworthy AI for Power Grid Operations
12.3 AI-Driven Risk Management in Reactive Power Control
12.3.1 Predictive AI Models for Wind Energy and Reactive Power Demand
12.3.2 AI-Based Optimization for Dynamic Reactive Power Compensation
12.3.3 Model Predictive Control (MPC) Enhanced by AI for Grid Stability
12.4 Risk-Based AI Algorithms for Power System Stability
12.4.1 Cybersecurity and Resilience of AI Systems in Power Grids
12.4.2 AI for Energy Storage and Demand Response in Reactive Power Control
12.4.3 Hybrid Control Systems for Reactive Power Compensation
12.5 Case Study: Enhancing Grid Resilience with AI-Optimized FACTS Devices in Wind-Driven Weak Grid Networks
12.5.1 Abstract
12.5.2 Introduction
12.5.3 Proposed Methodology
12.5.4 Results and Discussion
12.5.5 Conclusion
12.6 Final Thoughts on the Role of AI-Optimized FACTS Devices for the Enhancement of Wind-Integrated Weak Grid Systems
References
13. AI-Based Frequency Regulation for a Deregulated Two-Area Power System
D. Lakshmi, V. Pramila, S. Aasha Nandhini and R. Rajasree
13.1 Introduction
13.2 Two-Area Deregulated Power System
13.2.1 Deregulated Power System
13.2.2 Formation of DPM
13.2.3 Wind System Modeling
13.2.4 Simulation Diagram of the Two-Area Power System
13.3 Controllers
13.3.1 PI Controller
13.3.2 Artificial Optimization Algorithm-Based PI
13.3.3 AI-Based Flower Pollination Algorithm
13.4 Case Studies
13.5 Conclusion
References
Part IV: Real-World AI Applications and Future Opportunities
14. Smart Defense Vehicle (Bot) with AI-Assisted Security System
V. Sridevi and S. Priya
14.1 Introduction
14.2 Related Works
14.3 Existing System
14.3.1 Military Vehicle
14.3.2 Unmanned Ground Vehicle
14.3.3 Disadvantages
14.4 Proposed System
14.4.1 Dynamic Time Warping (DTW)
14.4.2 GPS – Global Positioning System
14.5 Hardware Implementation
14.5.1 Raspberry Pi 3 – Model B
14.5.2 Web Camera
14.5.3 Relay
14.5.4 Arduino Uno Microcontroller
14.5.5 Battery
14.5.6 DC Motor
14.5.7 Motor Controller for Steering
14.5.8 PWM (Engine) Motor Controller
14.5.9 V3.1 Voice Module
14.5.10 NEO 6M GPS Module
14.6 Results and Discussion
14.7 Conclusion
14.7.1 Future Scope
References
15. Smart Motor Fault Detection Leveraging LabVIEW and IoT Integration
Vinoth Kumar P., Priya S., Prakash S., Gunapriya D. and Sridevi V.
15.1 Introduction
15.2 Proposed Work
15.2.1 Design of the Proposed Fault Detection System
15.3 Simulation Model of LabVIEW
15.4 Experimental Results
15.4.1 Simulation Results
15.4.2 Hardware Results
15.4.3 Blynk Output
15.5 Conclusion
References
Index
