Is your feature request related to a problem? Please describe. A few days ago our Git server went down. As a result, production could not scale up during a spike in traffic: new service instances could not start because they were unable to retrieve their configuration, and the whole production environment went down.

Describe the solution you'd like As a solution, we had to write custom code to intercept connection exceptions:

 @AfterThrowing("execution(* org.springframework.cloud.config.server.environment.JGitEnvironmentRepository.*(..))")

and use an S3 bucket as a failover.

Describe alternatives you've considered Intercepting Git connection exceptions is not the best approach; it would be much nicer to have a Git failover option configurable through YAML, for example:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://example.com/my/repo
          timeout: 4
          failover: awss3
        awss3:
          region: us-east-1
          bucket: bucket1    

but in such a way that the awss3 configuration is only activated when the Git connection fails (otherwise the server would download two copies of the same files, one set from Git and one from S3).

Additional context Our current workaround (shown here as a complete aspect class; only the fields and the advice method come from our actual code):

@Value("${spring.cloud.config.server.awss3.bucket:''}")
private String bucketName;
@Value("${spring.cloud.config.server.git.basedir:'/tmp/config-service/config-repo'}")
private String baseDirPath;

@AfterThrowing("execution(* org.springframework.cloud.config.server.environment.JGitEnvironmentRepository.*(..))")
    public void handleGitConnectionException() {
        log.warn("Failed to clone Git project, fallback to S3");
        try {
            ObjectListing objects = s3Client.listObjects(bucketName);
            for (S3ObjectSummary item : objects.getObjectSummaries()) {
                log.info("Downloading file: " + item.getKey());
                File newFile = new File(baseDirPath + item.getKey());
                ObjectMetadata object = s3Client.getObject(new GetObjectRequest(bucketName, item.getKey()), newFile);
                if (object != null) {
                    log.info("File: " + item.getKey() + " was download successfully");
                }
            }
        } catch (Exception e) {
            log.error("Failed to download config files from S3", e);
        }
    }

Comment From: ryanjbaxter

Have you considered using a composite? I think this issue is very similar to https://github.com/spring-cloud/spring-cloud-config/issues/1928

Comment From: Bryksin

Hmm, yes, it has something in common, though our Config Service runs in EKS, so it is a clean Docker image without any local files. If Git fails while the Config Service is already running (meaning the repo was pulled at least once), the local copies are there to serve from; but if the Config Service starts while Git is already down and unavailable, we need to pull those configs from somewhere (in our case, an S3 bucket). The problem is the complexity of intercepting the exception and all the custom coding needed to solve this, which is why I'm proposing an official, well-designed fallback path for the case where the primary source of configs is unavailable for some reason.

Comment From: ryanjbaxter

I'm not necessarily saying you need to use a local fallback; rather, use a composite configuration with Git and S3 config so the S3 config is tried when the git config fails.
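For example, a minimal sketch using the list-based composite syntax from the Spring Cloud Config documentation (the URI, region, and bucket values are just the placeholders from the example above, and whether the awss3 backend can be listed this way depends on your Config Server version):

spring:
  profiles:
    active: composite
  cloud:
    config:
      server:
        composite:
        - type: git
          uri: https://example.com/my/repo
        - type: awss3
          region: us-east-1
          bucket: bucket1

With Git listed first, precedence should follow the order of the entries in the list.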

Comment From: Bryksin

"so the S3 config is tried when the git config fails"

That is the intention, and we wrote custom code (provided above) for this, but right now, if I define a composite with two sources (Git and S3), both will be active and both will be read periodically. Unless I missed something in the documentation and there is already functionality to define primary and secondary sources, where the secondary is only used when the first fails?

This ticket is exactly about that: the ability to define primary and secondary sources, where the secondary source is not active until the first one has failed.

You see, in our case we commit configs into the Git repo, and the push event triggers a Jenkins pipeline that uploads all the files to S3 as a backup. If the two sources are active at the same time, there can be a problem when changes have landed in Git but the pipeline has not finished yet: the two sources will then provide different versions of the same files, and which of them the Config Server serves is a big question. So a composite in which both sources are active at the same time and serving the same files does not seem like the best approach; it should be primary and secondary, with the secondary activated only when the first fails.

Comment From: Bryksin

Sorry, I just went through the documentation and found the definition of "composite". You are right, features such as primary and secondary sources already exist; it is just that our Config Server is outdated.

So I guess this ticket is not relevant anymore and can be closed! Extremely sorry for wasting your time.

Comment From: ryanjbaxter

Don't be sorry at all!