Cline

This guide explains how to configure Cline to work with the PwC AI-CoE Coding Agents Gateway.

Overview

Cline is a VS Code extension that supports multiple LLM providers. This guide covers configuration using the OpenAI-compatible API endpoint.

Base URL Configuration

In Cline's API Configuration panel, set the Base URL to:

https://idi-coding-agents.pwc.it

Cline automatically appends /chat/completions to this URL when making requests.
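As an illustration, the full endpoint Cline targets can be derived from the configured Base URL like this (a minimal sketch; only the appending of `/chat/completions` is taken from this guide):

```python
# Sketch: how the full request URL follows from the configured Base URL.
BASE_URL = "https://idi-coding-agents.pwc.it"

def chat_completions_url(base_url: str) -> str:
    # Cline appends /chat/completions to the Base URL when making requests.
    return base_url.rstrip("/") + "/chat/completions"

print(chat_completions_url(BASE_URL))
# https://idi-coding-agents.pwc.it/chat/completions
```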

Network Requirements

The endpoint https://idi-coding-agents.pwc.it is only accessible via PwC VPN set to Italy or Italy On-Premise.

An alternative endpoint (https://doc-interact-backend.pwc.it) is reachable from the public internet and can also be used as the Base URL. However, it is not recommended: it sits behind the Imperva firewall, which may interpret the frequent requests made by coding agents as a DDoS attempt and block them.

API Key Configuration

In the OpenAI Compatible API Key field, enter your API key obtained as described in Get Access.

note

The PwC AI-CoE Coding Agents Gateway does not use this field for authentication; the API key is read from the `api-key` entry in Custom Headers instead. The field must still be filled in because Cline requires a value.

Custom Headers

In the Custom Headers section, add the following entries:

| Header | Value |
| --- | --- |
| `api-key` | Your API key (see Get Access) |
| `tenant-id` | Your tenant ID (see Get Access) |
| `model-name` | Model identifier, in any of three formats: vendor naming convention (e.g., `claude-sonnet-4-6`), Coding Agents Gateway naming convention (e.g., `GENAI_SHARED_VERTEXAI_ANTHROPIC_CLAUDE_46_SONNET`), or PwC GenAI Shared Service naming convention (e.g., `vertex_ai.anthropic.claude-sonnet-4-6`). See Available Models for all values. |
| `auth-type` | `api-key` |
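To verify the configuration outside Cline, a raw request to the gateway can be sketched as follows. The header names come from the table above; the key, tenant ID, and model name are placeholders you must replace with your own values (see Get Access), and the request body shape assumes the standard OpenAI-compatible chat-completions format:

```python
import json
import urllib.request

GATEWAY_URL = "https://idi-coding-agents.pwc.it/chat/completions"

# Placeholder credentials -- substitute your own values from Get Access.
headers = {
    "Content-Type": "application/json",
    "api-key": "<your-api-key>",
    "tenant-id": "<your-tenant-id>",
    "model-name": "claude-sonnet-4-6",  # any of the three accepted naming formats
    "auth-type": "api-key",
}

body = json.dumps({
    "messages": [{"role": "user", "content": "Hello"}],
}).encode()

req = urllib.request.Request(GATEWAY_URL, data=body, headers=headers, method="POST")
# Sending the request requires the PwC VPN (Italy or Italy On-Premise):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```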

Model Configuration

Important: Set Max Output Tokens

Cline does not auto-detect model limits. You must manually set Max Output Tokens (and Context Window Size) in the MODEL CONFIGURATION section, otherwise Cline will use incorrect defaults and may truncate responses or fail.

In the MODEL CONFIGURATION section, set the following parameters to match your chosen model:

| Parameter | Value |
| --- | --- |
| Context Window Size | Value from the Available Models table |
| Max Output Tokens | Value from the Available Models table |

Optional: Cost Tracking

To monitor conversation costs, you can also configure:

| Parameter | Value |
| --- | --- |
| Input Price / 1M Tokens | Value from the Available Models table |
| Output Price / 1M Tokens | Value from the Available Models table |
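As a worked example of how these prices translate into a per-conversation cost, the arithmetic below uses hypothetical prices, not values from the Available Models table; substitute the real prices for your chosen model:

```python
# Hypothetical prices in USD per 1M tokens -- replace with the real values
# from the Available Models table for your chosen model.
INPUT_PRICE_PER_1M = 3.00
OUTPUT_PRICE_PER_1M = 15.00

def conversation_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one conversation at the prices above."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_1M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_1M

# e.g. 50k input tokens and 4k output tokens:
print(f"${conversation_cost(50_000, 4_000):.4f}")
# $0.2100
```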
tip

Leave the Temperature parameter at its default value for optimal results.