Usage Policies - Portal KAI
Information Security Policy for the Use of Artificial Intelligence
We are guided by the Artificial Intelligence Governance Policy and the Corporate Policy on Ethics and Responsible Use of AI.
Our Goal?
To ensure the confidentiality, integrity, and availability of the information processed by AI systems, while preventing risks such as unauthorized access, data manipulation, or the misuse of technology.
Target Audience?
To all Keralty Group employees and third parties who use, develop, or interact with AI-based tools.
Corporate Policy on Ethics and Responsible Use of Artificial Intelligence
Our goal is to ensure that AI is used in a fair, transparent, and responsible manner, with a positive impact on people and society.
Target Audience?
To all AI solutions that are designed, developed, implemented, or acquired by Keralty, and to all employees, providers, strategic partners, and third parties involved in AI projects—at every stage of the systems’ lifecycle.
Policy on Innovation and Management of Artificial Intelligence Solutions
Our commitment is to ensure that every AI solution is auditable, explainable, and trustworthy, while promoting a culture of continuous improvement and centralized, integrated management.
Target Audience?
To all AI solutions, at any stage of their lifecycle, and to all individuals, teams, and partners involved in their design, development, implementation, or use within the organization.
Artificial Intelligence Governance Policy
Its purpose is to ensure that Artificial Intelligence contributes to improving services, optimizing processes, and strengthening decision-making—always in alignment with our corporate values and strategic objectives.
What Does This Policy Establish?
- Principles for ethical, transparent, and responsible use.
- Clear roles and responsibilities in AI management.
- Processes for regular audits and performance monitoring.
- Mechanisms to align all AI initiatives with Keralty’s strategic vision.
Target Audience?
To all AI solutions and to all individuals, teams, and third parties involved in their design, development, implementation, and operation throughout the entire lifecycle of the systems.
FAQs
What AI solutions can I expect to find at Keralty?
At Keralty, you can expect to find and use various AI solutions designed to optimize processes, improve services, and support informed decision-making, such as Gemini for Workspace tools. These solutions are developed or acquired following strict quality, security, and ethical guidelines.
Does Keralty have policies governing the use of AI?
Yes, Keralty has a robust regulatory framework for the use of AI. It includes the Corporate Policy on AI Governance at Keralty (SIG-GD-CKE-PL09), the Corporate Policy on Ethics and Responsible Use of AI at Keralty (SIG-GD-CKE-PL10), the Corporate Policy on Information Security for AI (SIG-GD-CKE-PL11), and the Corporate Policy on Innovation and AI Solutions Management (SIG-GD-CKE-PL12).
What ethical principles guide Keralty's use of AI?
Keralty adheres to fundamental ethical principles in the use of AI: Transparency, Fairness and Non-Discrimination, Privacy and Data Protection, Accountability, Safety and Reliability, and Positive Social Impact. This ensures that our solutions are fair, understandable, and beneficial to society.
How is information security handled in AI systems?
Information security is a top priority. Keralty integrates security and privacy from the design stage of AI systems, applying the principles of least privilege and need-to-know. Risk assessments are conducted before any AI system is used, and data protection policies, applicable laws, and standards such as ISO/IEC TR 24028:2021 and ISO/IEC 42001:2023 are followed. Additionally, security events and incidents are continuously monitored and managed.
How does Keralty address bias in AI models?
Keralty is committed to mitigating bias in AI models. Bias and fairness testing is mandatory at every stage of an AI model's development and operation (design, data collection, implementation, and evaluation). Models are adjusted and reviewed to prevent discriminatory or unfair outcomes, ensuring fairness and non-discrimination.
Can I understand how an AI system makes its decisions?
Yes. Keralty promotes Algorithmic Transparency: the ability to explain and understand how an AI system makes decisions. Processes, algorithms, and decisions related to AI must be documented clearly and accessibly so that stakeholders can understand how automated decisions are made.
Who is responsible for decisions made with AI?
Responsibility for AI-related decision-making is clearly assigned to the leaders of each area. An AI governance committee is responsible for ensuring compliance with policies and making key decisions. Keralty takes responsibility for the outcomes generated by AI systems and has mechanisms in place to address potential errors or negative impacts.
How does Keralty foster innovation with AI?
Keralty promotes a culture of continuous innovation and centralized management of AI tools. This includes ensuring access to relevant data and technological resources, establishing controlled environments for proofs of concept, fostering ongoing staff training, adopting iterative and agile approaches, and encouraging multidisciplinary and interinstitutional collaboration.
Is my personal data protected when AI is used?
Yes. Keralty strictly adheres to Privacy and Data Protection principles. Data used to train and operate AI models is collected, processed, and stored responsibly, ensuring quality, accuracy, security, and confidentiality, in full compliance with data protection regulations (such as Law 1581 of 2012 in Colombia).
Who oversees the development and use of AI at Keralty?
The development and use of AI are overseen by various roles and committees. The AI Program Oversight Committee supervises policy implementation and makes strategic decisions. Corporate Information Security Management and the Data & Analytics Manager hold key responsibilities for ensuring the security, quality, and responsible use of AI.