midPoint Connector Development & Integration Engineering
Connect midPoint to your HR systems, directories, cloud platforms, legacy applications, and custom databases. We build, test, and deliver production-ready connectors tailored to your integration architecture.
Have a specific integration challenge?
Why Connector Strategy Matters
Connectors are the operational spine of any midPoint deployment. They synchronize identity data between midPoint and your source systems (HR, directories, cloud platforms, legacy applications), automate provisioning and deprovisioning, and maintain real-time reconciliation. Without well-designed connectors, midPoint cannot deliver on its core promise: automated, compliant, maintainable identity lifecycle management.
A poor connector strategy costs you in four ways:
- Operational fragility: Brittle, tightly-coupled connectors break when upstream systems change, creating manual workarounds and audit exposure.
- Slow provisioning: Inefficient connector logic delays user onboarding and causes manual remediation.
- Reconciliation drift: Connector errors that go undetected cause identity data corruption and compliance violations.
- High maintenance cost: Connectors without proper documentation, error handling, and monitoring become tech debt.
WKI approaches connector development as an architectural discipline, not a checkbox task. We design for resilience, operability, and long-term maintainability.
Integration Types We Handle
HR Systems
SAP HCM & SuccessFactors: We build connectors that extract employee data from SAP SuccessFactors Employee Central or on-premises SAP HCM (ECC) via OData APIs or RFC. Typical scope includes employee master data sync, organizational hierarchy, cost center mapping, and manager relationships.
Generic JDBC / SQL: For proprietary or legacy HR systems accessible via database connections, we design JDBC-based connectors with robust error handling, transaction isolation, and change-data-capture patterns where available. We handle both full sync and incremental update scenarios.
Flat File & CSV Import: Many organizations start with CSV or delimited-file feeds from HR systems. We build file-based connectors that handle encoding, delimiter variance, validation, and scheduled ingestion patterns, often with pre-processing and deduplication logic.
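As an illustration of the deduplication step in such a feed, here is a minimal sketch assuming a delimited file keyed by employee ID where the last occurrence of a key wins; the class and column choices are illustrative, not part of any existing connector:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Deduplicate a delimited HR feed: key each row by employee ID and
// keep only the last occurrence, preserving original file order.
public class FeedDedup {
    public static List<String[]> dedupeByKey(List<String> lines, String delimiter, int keyColumn) {
        Map<String, String[]> latest = new LinkedHashMap<>();
        for (String line : lines) {
            if (line == null || line.isBlank()) continue;  // skip empty rows
            String[] fields = line.split(java.util.regex.Pattern.quote(delimiter), -1);
            if (fields.length <= keyColumn) continue;      // malformed row: too few columns
            String key = fields[keyColumn].trim();
            if (key.isEmpty()) continue;                   // rows without a key are rejected
            latest.put(key, fields);                       // later rows overwrite earlier ones
        }
        return List.copyOf(latest.values());
    }
}
```

In a real engagement this step runs as pre-processing before rows are handed to midPoint's provisioning logic, so duplicate HR records never reach reconciliation.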
Directory Services
Active Directory & Azure AD / Entra ID: We develop connectors for bidirectional AD sync (user creation, group membership, attribute sync) and Azure AD / Microsoft Entra ID integration via Microsoft Graph API. These connectors handle schema mapping, password provisioning, license management, and cloud-to-on-premises sync patterns.
OpenLDAP & Generic LDAP: For OpenLDAP, eDirectory, and other LDAP-compliant directories, we build connectors with robust schema handling, entry DN construction, group membership provisioning, and nested group support. We optimize for both sync performance and operation atomicity.
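DN construction is one of the places where naive connectors break: names containing commas or plus signs must be escaped per RFC 4514 before they are embedded in a DN. A small sketch of that escaping (class and method names are illustrative):

```java
// Escape an attribute value for safe inclusion in an LDAP DN (RFC 4514):
// backslash-escape special characters, a leading '#', and leading/trailing spaces.
public class DnEscaper {
    public static String escapeRdnValue(String value) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < value.length(); i++) {
            char c = value.charAt(i);
            boolean special = "\\,+\"<>;=".indexOf(c) >= 0;
            boolean edgeSpace = (c == ' ') && (i == 0 || i == value.length() - 1);
            boolean leadingHash = (c == '#') && i == 0;
            if (special || edgeSpace || leadingHash) sb.append('\\');
            sb.append(c);
        }
        return sb.toString();
    }

    public static String userDn(String cn, String baseDn) {
        return "cn=" + escapeRdnValue(cn) + "," + baseDn;
    }
}
```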
Cloud & SaaS Platforms
SCIM 2.0 Provisioning: We build connectors based on SCIM 2.0, the System for Cross-domain Identity Management standard, for cloud platforms such as Slack, Okta, ServiceNow, and Atlassian. Scope includes user provisioning, deprovisioning, group membership, and attribute mapping.
REST API Integration: Many SaaS platforms expose REST/JSON APIs for user management. We engineer connectors that handle OAuth 2.0 / API key authentication, pagination, error retry logic, rate limiting, and schema mapping for platforms like Salesforce, HubSpot, Zendesk, and others.
Google Workspace & Microsoft 365: We develop connectors that use Google Admin API and Microsoft Graph API to manage user accounts, group memberships, and resource provisioning in cloud identity platforms.
Legacy & Custom Applications
SOAP Web Services: Older enterprise systems often expose SOAP-based web service APIs. We develop connectors that call SOAP endpoints, manage WSDL schemas, handle authentication, and map identity operations to complex SOAP request/response patterns.
Custom Database Schemas: Many organizations have proprietary database schemas for user management. We engineer JDBC connectors that understand your specific schema, implement change-data-capture or polling patterns, and ensure transaction safety for provisioning operations.
Proprietary APIs & Legacy Protocols: If your application exposes a custom API or protocol (XML-RPC, FTP-based feeds, custom REST dialect), we can design and build a connector that bridges midPoint to that system safely.
File-Based & Batch Provisioning
CSV & Delimited Formats: We build connectors that ingest batch identity feeds (user creations, updates, and terminations) from scheduled file drops. Typical patterns include validation, duplicate detection, pre-processing, and queuing into midPoint’s provisioning logic.
Fixed-Width & Structured Formats: For legacy systems that output fixed-width or custom-structured files, we engineer parsers and loaders that reliably extract identity data and feed it into midPoint’s reconciliation and provisioning engine.
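The core of a fixed-width loader is slicing each record by column widths. A minimal sketch (the widths and field layout are illustrative; real layouts come from the system’s file specification):

```java
import java.util.ArrayList;
import java.util.List;

// Slice a fixed-width record into trimmed fields given the column widths.
public class FixedWidthParser {
    public static List<String> parse(String line, int... widths) {
        List<String> fields = new ArrayList<>();
        int pos = 0;
        for (int w : widths) {
            int end = Math.min(pos + w, line.length());  // tolerate short trailing records
            fields.add(pos >= line.length() ? "" : line.substring(pos, end).trim());
            pos += w;
        }
        return fields;
    }
}
```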
Build vs. Buy: When to Use Existing Connectors vs. Custom Development
Start here: Evolveum and the wider community publish a library of connectors built on the standard ConnId framework and Evolveum’s Polygon connector project. Before we design a custom connector, we evaluate whether an existing connector meets your needs.
Use Existing Connector If:
- A published Polygon or ConnId connector exists for your target system
- The connector’s features (read, create, update, delete, schema) match your functional requirements
- The connector is actively maintained or stable
- Authentication method (basic auth, OAuth, API key) is supported
- You can map your attributes without custom business logic
Outcome: Lower cost, shorter deployment, vendor support if available.
Develop Custom Connector If:
- No published connector exists for your system
- Existing connectors lack critical features (e.g., group provisioning, password sync)
- Your system uses a proprietary or non-standard API
- Integration requires complex business logic or transformation
- You need connector-level error handling and retry patterns specific to your environment
Outcome: Tailored to your architecture, full ownership, investment in documentation and training.
We maintain a curated list of existing Polygon and ConnId connectors and their capability matrices. During discovery, we assess your target system, recommend whether to build or adapt an existing connector, and scope the effort accordingly.
WKI’s Connector Development Approach
We follow a disciplined methodology that ensures connectors are resilient, maintainable, and aligned with your midPoint architecture.
What’s Included in a Connector Development Engagement
Technical Deliverables
- ConnId-compliant connector code (Groovy or Java)
- Schema definition (object types, attributes, operations)
- Configuration templates
- Unit and integration tests
- Error handling and logging framework
- Source code repository with version control
- Connector JAR / bundle for deployment
Documentation & Knowledge
- Connector design and architecture document
- Configuration and deployment guide
- Operations runbook (monitoring, troubleshooting, common issues)
- API integration reference
- Code comments and inline documentation
- Training session(s) with your team
- Handoff and support transition plan
Scope varies: A straightforward REST API connector with basic CRUD operations typically takes 4–8 weeks. Complex integrations with custom business logic, multiple data sources, or heavy transformation requirements can take 12–16 weeks. We provide a detailed estimate after discovery.
Let’s Design Your Integration Architecture
Whether you need a custom connector, integration strategy advice, or Polygon connector evaluation, we can help you move forward.
Connector Maintenance & Long-Term Support
After deployment, connectors need ongoing care. Target system APIs evolve, midPoint versions change, and operational patterns shift. We offer several support models:
Break/Fix Support
You contact us when issues arise. We investigate, patch, test, and redeploy the connector. Response and resolution SLAs are available.
Retainer / Advisory Support
Monthly engagement covering connector health checks, minor updates, target system API changes, and architectural guidance. Ideal if you have multiple connectors or complex integrations.
Training Your Team
If you want your internal team to maintain the connector, we provide deep technical training and mentoring. You own the code and can extend it independently.
Managed Service
We monitor your connectors, handle all updates and troubleshooting, and manage connector health as part of your broader midPoint operations.
Connector versioning: midPoint releases new versions regularly. We track breaking changes in the ConnId framework and advise you on connector compatibility. We can retest and update connectors for new midPoint versions to maintain operational continuity.
Frequently Asked Questions
What connector framework does midPoint use?
midPoint uses ConnId (Connectors for Identity Management), an open-source framework that standardizes connector development. Connectors are written in Java or Groovy and deployed as JAR bundles, either directly into midPoint or to a separate ConnId connector server. Evolveum also maintains the Polygon project, which provides common tooling and a growing collection of open-source ConnId connectors.
All custom connectors we build are ConnId-compliant and can be deployed locally within midPoint or on a remote connector server.
How long does it typically take to develop a custom connector?
It depends on complexity. A straightforward REST API connector with basic create/read/update/delete operations typically takes 4–8 weeks. More complex integrations with custom business logic, multiple object types, or heavy transformation logic can take 12–16 weeks. We scope the effort precisely after discovery, including design review time, testing, documentation, and knowledge transfer.
Can we adapt an existing Polygon connector instead of building from scratch?
Yes. If a Polygon or community connector exists for your target system, we can evaluate it against your requirements. If it covers 80–90% of your use case, we can often extend or customize it. This is faster and cheaper than building from scratch and reduces long-term maintenance burden.
If an existing connector lacks critical features or has architectural issues, we may recommend building a new connector or replacing it entirely. We’ll make an honest assessment during discovery.
How do we handle SCIM 2.0 provisioning to cloud SaaS platforms?
SCIM 2.0 is a standardized protocol for identity provisioning. We build connectors that implement SCIM 2.0 client operations (POST/PATCH/DELETE) to send user and group information to SaaS platforms. We handle OAuth 2.0 authentication, schema mapping, and error retry logic.
Many platforms (Slack, Okta, ServiceNow, Atlassian) support SCIM. We verify the platform’s SCIM implementation and build accordingly. We also handle both push (midPoint → SaaS) and pull (SaaS → midPoint) patterns when needed.
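As a sketch of what the outbound push looks like at the payload level, here is a minimal SCIM 2.0 user-creation body using attribute names from the RFC 7643 core User schema; the builder class itself is illustrative, and a real connector would use a JSON library rather than string assembly:

```java
// Build a minimal SCIM 2.0 user-creation payload (RFC 7643 core User schema).
public class ScimPayload {
    private static String q(String s) {
        return "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"") + "\"";
    }

    public static String createUser(String userName, String givenName, String familyName, boolean active) {
        return "{"
            + "\"schemas\":[\"urn:ietf:params:scim:schemas:core:2.0:User\"],"
            + "\"userName\":" + q(userName) + ","
            + "\"name\":{\"givenName\":" + q(givenName) + ",\"familyName\":" + q(familyName) + "},"
            + "\"active\":" + active
            + "}";
    }
}
```

This body would be sent as an HTTP POST to the platform’s `/Users` endpoint, with updates expressed as PATCH operations against the created resource.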
What happens when midPoint releases a new version? Do our connectors still work?
Most midPoint updates maintain backward compatibility for connectors. However, feature releases (e.g., 4.5 → 4.6) can introduce breaking changes in the ConnId framework or midPoint’s connector API.
We track midPoint release notes and test connectors against new versions. If updates are needed, we handle the recompilation, testing, and deployment. This is typically a low-effort change if the connector was well-documented and designed. Including connector maintenance in a retainer support plan ensures you’re always compatible with your midPoint version.
How do you handle error scenarios and failed provisioning operations?
Robust error handling is critical. We design connectors with:
- Retry logic with exponential backoff for transient failures
- Clear error logging to identify root causes
- Graceful degradation (e.g., failing closed on authentication errors instead of retrying blindly)
- Rate-limit awareness (honoring API throttling)
- Timeout and circuit-breaker patterns for resilience
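The first pattern above, exponential backoff, can be sketched as a small deterministic delay calculation; the parameter values are illustrative, and production code would also add random jitter to avoid thundering-herd retries:

```java
// Compute the wait before retry attempt n (0-based) using capped exponential backoff.
public class Backoff {
    public static long delayMillis(int attempt, long baseMillis, long capMillis) {
        if (attempt < 0) throw new IllegalArgumentException("attempt must be >= 0");
        double delay = baseMillis * Math.pow(2, attempt);  // 1x, 2x, 4x, 8x, ...
        return (long) Math.min(delay, capMillis);          // never wait longer than the cap
    }
}
```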
We also work with your operations team to define alerting and manual remediation procedures for unrecoverable failures. The connector should surface enough information for your team to diagnose and fix issues quickly.
Can connectors handle complex attribute transformations and business logic?
Yes. Connectors can implement custom business logic (data transformation, validation, enrichment) as part of the read/create/update operations. Common examples:
- Deriving email from first/last name
- Mapping department codes to organizational units
- Building group memberships from HR cost centers
- Concatenating or formatting attributes
- Conditional logic (e.g., only sync active employees)
That said, we prefer to keep connectors focused on read/write operations and push complex business logic to midPoint’s mapping and workflow layers where it’s more maintainable and reusable. The balance depends on your architecture.
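For instance, the email-derivation example above might look like the following self-contained sketch; the first.last@domain policy and the normalization rules are assumptions, not a fixed standard:

```java
import java.text.Normalizer;

// Derive a login-style email from first/last name: lowercase, strip diacritics,
// and drop anything that is not a letter or digit.
public class EmailDeriver {
    private static String clean(String s) {
        String ascii = Normalizer.normalize(s, Normalizer.Form.NFD)
                .replaceAll("\\p{M}", "");  // remove combining accent marks
        return ascii.toLowerCase().replaceAll("[^a-z0-9]", "");
    }

    public static String derive(String first, String last, String domain) {
        return clean(first) + "." + clean(last) + "@" + domain;
    }
}
```

In practice this kind of rule usually lives in a midPoint mapping rather than the connector, for exactly the maintainability reasons described above.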
What about bidirectional sync and reconciliation?
Connectors enable both directions of sync:
- Inbound (live sync / reconciliation): midPoint reads identity data from the target system and updates its repository.
- Outbound (provisioning): midPoint writes identity data to the target system when changes occur in midPoint.
We design connectors to support both patterns. Reconciliation requires the connector to enumerate all objects in the target system so midPoint can detect creations, updates, and deletions. Provisioning requires the connector to implement create, update, and delete operations. We scope the connector to cover the operations you actually need.
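At its core, the reconciliation comparison reduces to two set differences over account identifiers; this sketch uses plain strings and illustrative method names:

```java
import java.util.HashSet;
import java.util.Set;

// Reconciliation in miniature: given account identifiers known to midPoint and
// identifiers enumerated from the target system, compute what differs on each side.
public class ReconDiff {
    public static Set<String> missingInTarget(Set<String> midpoint, Set<String> target) {
        Set<String> diff = new HashSet<>(midpoint);
        diff.removeAll(target);   // accounts midPoint expects but the target lacks
        return diff;
    }

    public static Set<String> orphanedInTarget(Set<String> midpoint, Set<String> target) {
        Set<String> diff = new HashSet<>(target);
        diff.removeAll(midpoint); // accounts in the target with no midPoint owner
        return diff;
    }
}
```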
Can we run connectors on a separate connector server, or do they have to live inside midPoint?
Connectors can be deployed either way:
- In-process: Deploy the connector JAR directly to your midPoint server. The simplest option for most integrations.
- Remote connector server: Run the connector on a separate ConnId connector server. Useful when the connector needs network proximity to the target system, a different runtime, or isolation from the midPoint host.
We help you choose based on your security posture, compliance requirements, and operational preferences.
What does connector testing look like? What happens in staging?
Testing happens in stages:
- Unit tests: We test connector code in isolation (mocked target system).
- Staging integration tests: We test against your actual target system in a staging or sandbox environment, if available. We test all operations: schema fetch, list/read, create, update, delete, and reconciliation.
- End-to-end midPoint tests: We test provisioning and reconciliation workflows in a staging midPoint instance to ensure the connector integrates properly with midPoint’s provisioning engine, resource definitions, and mappings.
- Production pilot: We may run a limited production pilot (e.g., test user creation) before full rollout.
We deliver a test plan and test results documentation before deployment.
Ready to Build Your Connectors?
Whether you need a single custom connector, integration architecture review, or a multi-connector strategy for your midPoint deployment, we can help. Let’s discuss your target systems, integration requirements, and timeline.
Related Resources
Our approach to designing and deploying enterprise midPoint instances.
How connectors enable joiner/mover/leaver automation and compliance.
Migration strategy and connector requirements for moving from older platforms.
Get in touch to discuss your integration and architecture needs.
For Decision-Makers
Your identity platform is only as effective as the systems it connects to. Gaps in connector coverage mean manual provisioning, orphaned accounts, and audit findings. We build production-ready midPoint connectors for HR systems, directories, cloud platforms, legacy applications, and custom databases — using a disciplined engineering approach with full documentation, testing, and knowledge transfer. Every connector is built to the ConnId framework standard, designed for long-term maintainability, and handed over with complete operational documentation.

