App Development

AI by and for People: What We Learned at GooApps® in 2025

More than 1,000 hours of training, research, and development to apply artificial intelligence with judgment, ethics, and measurable results.

“AI can accelerate processes, but what truly matters is understanding what it does, why it does it, and for whom it does it.”

Introduction

In 2025, at GooApps®, we decided to take artificial intelligence applied to digital products seriously. Not as a trend. Not as a shortcut to produce more without thinking. But as an opportunity to build better-designed, more useful, and more human technology.

Throughout the year, we invested over 1,000 hours in training, research, and hands-on AI development. The objective was clear: learn how to apply AI with ethics, judgment, and responsibility—understanding every part of the process and its impact on the people who ultimately use our products.

This article summarizes that journey: what we learned, how we structured it, and what truly changed in the way we design, develop, and ensure quality.

Why we decided to train in AI (and why using it without understanding it is not enough)

AI has become a daily tool: assistants, search engines, health apps, sports platforms, productivity tools. It is also transforming software development.

But the more accessible it becomes, the easier it is to make a dangerous mistake: using AI without understanding it.

At GooApps®, we started from a simple principle: AI can make us more efficient. But it can also introduce errors, bias, or opaque decisions if not properly validated.

The core idea that guided the entire plan was this:

AI is not magic. If we use it, we must be able to explain it, test it, and take responsibility for its outcomes.

How we approached it at GooApps®: applied training, not accumulated theory

Learning AI does not mean collecting concepts. It means turning knowledge into real improvements. That is why we combined training, research, and applied development, ensuring that each lesson translated into concrete changes in our daily work.

The plan was cross-functional and involved three key areas:

  • Management (PM/PO): product definition, prioritization, clarity of requirements, and informed decision-making
  • Development: technical integration of AI where it adds value, improved architecture, and code quality
  • QA: reinforced validation, traceability, and controlled use of AI in testing and processes

Throughout the process, we maintained a constant principle: AI must be understandable, verifiable, and accountable. If we cannot explain what is happening or how it is validated, it is not progress. It is risk.

What we learned: three levels, one shared intention

We structured the plan into three levels. Not as a hierarchy of knowledge, but as an adaptation to different responsibilities within the team.

Foundational competencies (entire team)

Everyone needed to understand what AI is, what it can do, and what its risks are.

Key learnings:

  • General AI understanding: concepts, limitations, real-world applications
  • Ethics and responsibility: bias, privacy, and risk awareness
  • Generative AI tools for productivity
  • Automation of repetitive tasks (documentation, summarization, information organization)

This changed something essential: AI stopped being “a specialist topic” and became a shared language across the team.

Intermediate competencies (mid-level dev, QA, product)

The focus here was integrating AI into digital products safely and meaningfully.

Key learnings:

  • Natural Language Processing (NLP): chatbots and text analysis
  • Computer Vision applied to health, wellness, and sports
  • AI API integration: connection, measurement, and control
  • AI-supported UX/UI: prototyping and design consistency
  • AI in testing and QA: automation and improved coverage

We learned something critical: AI does not replace team judgment. It amplifies it—or distorts it—depending on how it is used.
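One pattern from the “connection, measurement, and control” competency can be sketched in code. This is a minimal illustration, not our production tooling: `measured_call`, `fake_model`, and the latency budget are hypothetical names, and any real model client would replace the stub.

```python
import logging
import time
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

def measured_call(model_call: Callable[[str], str], prompt: str,
                  fallback: str, budget_s: float = 5.0) -> str:
    """Wrap any model call with latency measurement and a safe fallback,
    so the product degrades gracefully instead of failing opaquely."""
    start = time.perf_counter()
    try:
        result = model_call(prompt)
    except Exception as exc:
        # Control: a model failure never reaches the user unhandled.
        log.warning("model call failed (%s); using fallback", exc)
        return fallback
    elapsed = time.perf_counter() - start
    # Measurement: every call is timed and logged for traceability.
    log.info("model answered in %.2fs", elapsed)
    if elapsed > budget_s:
        log.warning("latency above budget (%.2fs > %.2fs)", elapsed, budget_s)
    return result

# Hypothetical stub standing in for a real AI API client
def fake_model(prompt: str) -> str:
    return "summary: " + prompt[:20]

print(measured_call(fake_model, "Patient reports mild knee pain", "unavailable"))
```

The point of the wrapper is the principle stated above: the team, not the model, stays accountable for latency, failures, and what the user ultimately sees.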

Advanced competencies (technical specialization)

At the advanced level, we strengthened internal capacity to lead demanding AI projects.

Key learnings:

  • Developing proprietary models (when it makes sense and when it does not)
  • MLOps, deployment, and production observability
  • Deep Learning and Transformers
  • AI in digital health and wellness
  • Mobile optimization
  • Cybersecurity in AI-enabled environments
  • Strategic integration of AI into product roadmaps without unrealistic promises

The outcome was not just technical capability, but strategic responsibility.

Ethics and responsibility: our foundation for meaningful AI

If one principle defined this plan, it was a clear conviction: AI must be applied ethically and responsibly.

That means understanding each stage:

  • Data
  • Instructions or prompts
  • Model
  • Output
  • Validation
  • Impact on the individual

At GooApps®, we are committed to ensuring that AI:

  • Respects privacy and security
  • Is explainable in practical terms
  • Does not introduce unfair bias
  • Includes human oversight when necessary
  • Delivers real, measurable value

Internal validation framework before applying AI

Key question → why it matters:

  • What human problem are we solving? → Prevents using AI as a trend
  • What data is involved and what privacy risks exist? → Protects the end user
  • What can go wrong (errors, bias, false certainty)? → Anticipates negative impact
  • How do we validate it (QA, metrics, testing, review)? → Reinforces quality and control
  • What control does the user have, and how do they understand it? → Ensures transparency
  • What do we do if the model fails? → Maintains operational responsibility
  • Can we clearly explain the result? → Enables accountability

This checklist is not theoretical. It is a real tool we use before integrating AI into any product.
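To illustrate how a checklist like this can act as a hard gate rather than a formality, here is a small sketch. It is an assumption for illustration only: the field names, the `AIValidationChecklist` class, and the `ready_to_integrate` helper are hypothetical, not GooApps®’ actual internal tooling.

```python
from dataclasses import dataclass, fields

@dataclass
class AIValidationChecklist:
    """Hypothetical encoding of the pre-integration checklist.
    Each field holds the team's written answer; an empty answer blocks integration."""
    human_problem: str = ""
    data_and_privacy_risks: str = ""
    failure_modes: str = ""
    validation_plan: str = ""
    user_control: str = ""
    fallback_on_model_failure: str = ""
    result_explanation: str = ""

def ready_to_integrate(checklist: AIValidationChecklist) -> tuple[bool, list[str]]:
    """Return (ok, missing_questions): integration proceeds only when
    every question has a substantive answer."""
    missing = [f.name for f in fields(checklist)
               if not getattr(checklist, f.name).strip()]
    return (not missing, missing)

# Example: a draft checklist with two questions still unanswered
draft = AIValidationChecklist(
    human_problem="Summarize long clinical notes for clinicians",
    data_and_privacy_risks="De-identified notes only; no personal data leaves our infrastructure",
    failure_modes="Hallucinated findings; overconfident summaries",
    validation_plan="QA review of sampled summaries against source notes",
    user_control="Clinician sees source passages and can reject the summary",
)
ok, missing = ready_to_integrate(draft)
print(ok, missing)  # → False ['fallback_on_model_failure', 'result_explanation']
```

Encoding the questions as required fields makes the gate auditable: the unanswered questions are listed explicitly instead of being quietly skipped.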

Eight voices, eight lessons

To prevent this plan from becoming just a strategic document, we introduced a cross-functional internal reflection exercise.

Internal instructions:
Each person contributed an 80–120 word text following this structure:

  • “The most valuable thing I learned…”
  • “A practical example where it helped me…”
  • “A precaution I am now much more aware of…”

This exercise grounded training in real experiences across development, product, and QA, reinforcing a shared culture of responsibility in AI usage.

What changes for our clients

Training in AI was not an isolated internal exercise. It was a direct investment in improving what we deliver.

In practice, this translates into:

  • Greater clarity in feature definition
  • Increased speed with control, automating repetitive tasks without losing traceability
  • Higher quality and fewer surprises, thanks to reinforced QA
  • Better user experience through faster prototyping and validation
  • More responsible decision-making with clear boundaries and explainable outcomes

Well-applied AI should not feel like a trend. It should feel like a better experience.

Strategic objectives of the plan

This was not a one-off initiative. It was a strategic commitment to making AI a real internal capability at GooApps®.

We aimed to:

  • Build a shared internal AI culture
  • Optimize productivity without losing judgment
  • Enable practical, measurable AI integration
  • Drive innovation in digital health, wellness, and MedTech
  • Consolidate a specialized core team capable of leading advanced AI projects
  • Contribute value within the international AI ecosystem

In short: learn AI in order to build more useful technology for people—with greater judgment and control.

What comes in 2026

In 2026, we will continue with the same approach: learn, apply, measure, and improve.

Our commitment is not to “do AI.”
It is to build useful, safe, human-centered technology—using AI when it truly adds value.

AI may change how we work, but it should never change why we work. We continue building by and for people.

Frequently Asked Questions

Why is AI training important before applying it to digital products?

Because using AI without understanding its risks can introduce bias, errors, or opaque decisions. Training enables responsible, validated, and accountable implementation.

What does it mean for AI to be explainable?

It means understanding which data was used, which rules were applied, and how the result was validated. If it cannot be explained, accountability is impossible.

How can AI be integrated without compromising software quality?

Through QA validation, measurable metrics, human review, and defined controls before deployment.

What does cross-functional AI training bring to a team?

It creates a shared language, improves collaboration across roles, and reduces reliance on isolated profiles.

Does AI replace the human team?

No. It amplifies capabilities, but judgment, validation, and responsibility remain human.

Take the next step

Complete the form and GooApps® will help you find the best solution for your organization. We will contact you very soon!




