At Keralty, we believe in the power of transformation.

Usage Policies

We are redefining the future of healthcare with transformative and responsible Artificial Intelligence. Our comprehensive policy framework guides our commitment to AI that drives innovation and efficiency, grounded in principles of governance, ethics, and information security. These pillars are essential to ensuring our leadership in the sector, that every advancement aligns with our values, and that we act with integrity and excellence.
Information Security Policy for the Use of Artificial Intelligence
At Keralty, we promote the safe, ethical, and responsible use of Artificial Intelligence (AI). This policy outlines the guidelines to protect the information and systems involved in the use of AI, in alignment with our internal policies and international standards.

We are guided by the Artificial Intelligence Governance Policy and the Corporate Policy on Ethics and Responsible Use of AI.

Our Goal?
To ensure the confidentiality, integrity, and availability of the information processed by AI systems, while preventing risks such as unauthorized access, data manipulation, or the misuse of technology.

Target Audience?
All Keralty Group employees and third parties who use, develop, or interact with AI-based tools.
Corporate Policy on Ethics and Responsible Use of Artificial Intelligence
At Keralty, we believe that Artificial Intelligence should serve human well-being and values. That’s why this policy defines the ethical principles and operational guidelines that govern the development, implementation, and use of AI solutions within our organization.

Our goal is to ensure that AI is used in a fair, transparent, and responsible manner, with a positive impact on people and society.

Target Audience?
All AI solutions that are designed, developed, implemented, or acquired by Keralty, and all employees, providers, strategic partners, and third parties involved in AI projects, at every stage of the systems’ lifecycle.
Policy on Innovation and Management of Artificial Intelligence Solutions
At Keralty, we promote responsible innovation as a driver of transformation. This policy sets forth the key guidelines for the management of artificial intelligence solutions, ensuring that every tool meets the highest standards of quality, safety, ethics, and transparency.

Our commitment is to ensure that every AI solution is auditable, explainable, and trustworthy, while promoting a culture of continuous improvement and centralized, integrated management.

Target Audience?
All AI solutions, at any stage of their lifecycle, and all individuals, teams, and partners involved in their design, development, implementation, or use within the organization.
Artificial Intelligence Governance Policy
At Keralty, we understand that the use of Artificial Intelligence must be guided by strong principles and a strategic framework that ensures its ethical and effective application. This policy sets the rules and guidelines for the development, implementation, and responsible use of AI solutions across the organization.

Its purpose is to ensure that Artificial Intelligence contributes to improving services, optimizing processes, and strengthening decision-making—always in alignment with our corporate values and strategic objectives.

What Does This Policy Establish?
  • Principles for ethical, transparent, and responsible use.
  • Clear roles and responsibilities in AI management.
  • Processes for regular audits and performance monitoring.
  • Mechanisms to align all AI initiatives with Keralty’s strategic vision.
Target Audience?
All AI solutions and all individuals, teams, and third parties involved in their design, development, implementation, and operation throughout the entire lifecycle of the systems.

 

FAQs

What kind of Artificial Intelligence solutions can I expect to find or use at Keralty?

At Keralty, you can expect to find and use various AI solutions designed to optimize processes, improve services, and support informed decision-making, such as Gemini for Workspace tools. These solutions are developed or acquired following strict quality, security, and ethical guidelines.

Are there policies that regulate the use of AI at Keralty?

Yes, Keralty has a robust regulatory framework for the use of AI. This includes the 'Corporate Policy on AI Governance at Keralty' (SIG-GD-CKE-PL09), the 'Corporate Policy on Ethics and Responsible Use of AI at Keralty' (SIG-GD-CKE-PL10), the 'Corporate Policy on Information Security for AI' (SIG-GD-CKE-PL11), and the 'Corporate Policy on Innovation and AI Solutions Management' (SIG-GD-CKE-PL12).

What ethical principles does Keralty apply when developing and using AI?

Keralty adheres to fundamental ethical principles in the use of AI: Transparency, Fairness and Non-Discrimination, Privacy and Data Protection, Accountability, Safety and Reliability, and Positive Social Impact. This ensures that our solutions are fair, understandable, and beneficial to society.

How does Keralty ensure information security when using AI?

Information security is a top priority. Keralty integrates security and privacy from the design stage of AI systems, applying the principles of least privilege and need-to-know. Risk assessments are conducted before using any AI system, and data protection policies, applicable laws, and standards such as ISO/IEC TR 24028:2021 and ISO/IEC 42001:2023 are followed. Additionally, security events and incidents are continuously monitored and managed.

Can Keralty’s AI systems have biases? How are they addressed?

Keralty is committed to mitigating bias in AI models. Bias and fairness testing is mandatory at all stages of AI model development or operation (design, data collection, implementation, evaluation). Models are adjusted and reviewed to prevent discriminatory or unfair outcomes, ensuring fairness and non-discrimination.

Will I be informed how an AI system makes a decision that affects me?

Yes. Keralty promotes Algorithmic Transparency, which is the ability to explain and understand how an AI system makes decisions. Processes, algorithms, and decisions related to AI must be clearly and accessibly documented so that stakeholders can understand how automated decisions are made.

Who is responsible if an AI system makes a mistake or has a negative impact?

Responsibility for AI-related decision-making is clearly assigned to the leaders of each area. There is an AI governance committee responsible for ensuring compliance with policies and making key decisions. Keralty takes responsibility for the outcomes generated by AI systems and has mechanisms in place to address potential errors or negative impacts.

How does Keralty promote innovation in the use of AI?

Keralty seeks to promote a culture of continuous innovation and centralized management of AI tools. This includes ensuring access to relevant data and technological resources, establishing controlled environments for proof of concept, fostering ongoing staff training, adopting iterative and agile approaches, and encouraging multidisciplinary and interinstitutional collaboration.

Are my personal data safe when used by AI systems at Keralty?

Yes. Keralty strictly adheres to Privacy and Data Protection principles. Data used to train and operate AI models is collected, processed, and stored responsibly, ensuring quality, accuracy, security, and confidentiality, while fully complying with data protection regulations (such as Law 1581 of 2012 in Colombia).

Who oversees the development and use of AI at Keralty?

The development and use of AI are overseen by various roles and committees. The AI Program Oversight Committee supervises policy implementation and makes strategic decisions. The Corporate Information Security Management and the Data & Analytics Manager hold key responsibilities for ensuring the security, quality, and responsible use of AI.