Hello Thierry Tropée!
Thanks for posting your question on Microsoft Q&A!
You're using an ExecutePipeline activity in Azure Data Factory to run a child pipeline. But when that child pipeline takes more than an hour to complete, it fails with a timeout error. You tried to manually set a longer timeout
in the activity JSON, but it didn’t work — and you're wondering why.
The key thing to understand is: ExecutePipeline doesn't have its own timeout setting.
Instead, it inherits the timeout from the child pipeline itself. That timeout is usually set to 1 hour by default. So even if you try to set a timeout
in the ExecutePipeline activity JSON, it’ll be ignored — because the activity doesn’t support it directly.
For example, my parent pipeline's ExecutePipeline activity JSON has no timeout property in it at all. By contrast, if I check the Copy activity inside the child pipeline it invokes, I can see a timeout property in that activity's JSON.
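To illustrate, here is a minimal, hand-written sketch of what such a parent ExecutePipeline activity definition typically looks like; the pipeline and activity names are placeholders, not your actual resources:

```json
{
    "name": "Execute Child Pipeline",
    "type": "ExecutePipeline",
    "typeProperties": {
        "pipeline": {
            "referenceName": "ChildPipeline",
            "type": "PipelineReference"
        },
        "waitOnCompletion": true
    }
}
```

Note that there is no `policy` block with a `timeout` here; when `waitOnCompletion` is true, the activity simply waits on the child pipeline and surfaces whatever timeouts the child's own activities enforce.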
Here's what you can do:
- Go to Author in ADF Studio.
- Open the child pipeline.
- Click on the canvas background (not an activity).
- On the right, you’ll see pipeline properties.
- Under General → Timeout, set it to something longer, such as 04:00:00 or 12:00:00.
- Click Save and then Publish.
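Once published, the new value should show up in the child pipeline's JSON under the relevant activity's `policy`. A hedged sketch of a Copy activity with a 4-hour timeout (the activity name is illustrative, and the source/sink details are omitted):

```json
{
    "name": "Copy Source To Sink",
    "type": "Copy",
    "policy": {
        "timeout": "04:00:00",
        "retry": 0,
        "retryIntervalInSeconds": 30
    },
    "typeProperties": { }
}
```

The `timeout` value uses the `d.hh:mm:ss` timespan format, so `04:00:00` means four hours.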
Monitor the Change
- After publishing, run your parent pipeline again.
- Go to the Monitor tab and track the ExecutePipeline activity.
- It should now wait for the full timeout duration you configured in the child pipeline.
- If it still fails at 1 hour, double-check:
- You updated the correct pipeline.
- You published the change.
- There are no other activities inside the child pipeline (like Copy, Databricks, etc.) that are hitting their own timeouts.
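On that last point: if the child pipeline contains several activities, each one carries its own `policy.timeout`, and the first activity to hit its limit fails the whole run. A sketch of what to scan for in the child pipeline's JSON (activity names are made up):

```json
{
    "activities": [
        {
            "name": "Copy Staging Data",
            "type": "Copy",
            "policy": { "timeout": "04:00:00" }
        },
        {
            "name": "Run Notebook",
            "type": "DatabricksNotebook",
            "policy": { "timeout": "01:00:00" }
        }
    ]
}
```

In this sketch, the notebook activity would still fail after one hour even though the Copy activity is allowed four, so check every activity's timeout, not just one.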
Please check this link (from the Microsoft documentation) for more details.
Kindly "Accept the Answer" If I was able to resolve your issue.
Thanks
Pratyush