Why?
Recently I decided to make a backup of all my repositories on BitBucket. That is fine if you have up to 10 of them, but with more than 100 it becomes a pain.
How?
I used my Synology NAS as the backup storage. It runs Linux but still does not have all the capabilities and freedom of Ubuntu or Debian, so I tried to use what I already had (from the software perspective) and avoid installing anything extra for one-time use. Still, I had to add SynoCommunity as a package source and install SynoCli File Tools and Git; that is the minimum requirement, and the other tools (such as jq) come with SynoCli File Tools.
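Once those packages are in place, a quick sanity check helps before running anything. This is just a minimal sketch that confirms the tools the script below relies on are on the PATH:

# confirm the required tools are available before running the backup
for tool in git curl jq; do
    command -v "$tool" >/dev/null || echo "missing: $tool"
done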
Bash Script
So I created a bash script that retrieves the list of repositories of a given team or user in JSON format, stores it in the local folder, then parses the list and clones the repositories one by one into that folder. The BitBucket API limits the number of repositories returned on a single page to a maximum of 100, but it also tells you whether there is another (next) page, so you can parse that one as well and loop through as many pages as you need.
#!/bin/bash
USER="<your_bitbucket_user>"
SECRET="<your_bitbucket_secret_key>"
TEAM="<user_or_team>"
# you can uncomment this line if you need to create a folder for your team/user
# rm -rf "$TEAM" && mkdir "$TEAM" && cd "$TEAM"
NEXT_URL="https://api.bitbucket.org/2.0/repositories/${TEAM}?pagelen=100"
# jq prints the string "null" when .next is absent on the last page,
# so check for both an empty value and "null" to stop the loop
while [ -n "$NEXT_URL" ] && [ "$NEXT_URL" != "null" ]
do
    echo "Processing ${NEXT_URL}..."
    curl -u "${USER}:${SECRET}" -H "Content-Type: application/json" "$NEXT_URL" > repoinfo.json
    # extract the clone URL of every repository on the current page
    jq -r '.values[] | .links.clone[0].href' repoinfo.json > repos.txt
    NEXT_URL=$(jq -r '.next' repoinfo.json)
    while read -r repo
    do
        echo "Cloning ${repo}..."
        # I'm using HTTP links
        git clone "$repo"
    done < repos.txt
done
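One note on the jq expression: the script takes the first entry of the clone array, which for me was the HTTPS link. If you do not want to rely on the ordering of that array, you can select the link explicitly by name instead; this is a small variation, not part of the original script:

# pick the HTTPS clone link by name rather than by array position
jq -r '.values[].links.clone[] | select(.name == "https") | .href' repoinfo.json > repos.txt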
You need to replace everything in <…> with your own values; I hope that is clear 🙂
That’s basically it. Don’t forget to grant execute permission to your bash script.
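For example (backup.sh is just an illustrative name; use whatever you called the script):

chmod +x backup.sh
./backup.sh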