Bitbucket Pipelines .NET deploy - C#

I am trying to use the new Pipelines tool from Bitbucket. I have multiple .NET console applications (not .NET Core) that I want to build and then download. I found a page that says I can use Mono to build my project, and I use this .yml file to build it:
image: mono
pipelines:
  default:
    - step:
        script:
          - nuget restore
          - MONO_IOMAP=case xbuild /t:Build /p:Configuration="Release" /p:Platform="Any CPU" Solution.sln
The build succeeds, but now I am stuck on downloading my app (the exe with all its DLLs). I found that I can use Bitbucket Downloads, but how do I get my deploy folder there? I also found that I can zip some files and push the archive to Bitbucket Downloads, but how do I do that from the Mono image: how do I zip a whole folder and then download it? I don't mind using something other than Mono.

The mono image is built on debian:wheezy, so any Linux commands can run in the script portion of the YAML file; use them to extract the file before Bitbucket Pipelines tears down the container. The example you found includes a POST command at the end which deploys the artifact to Downloads in Bitbucket:
curl -v -u $BB_ACCESS -X POST "https://api.bitbucket.org/2.0/repositories/$BITBUCKET_REPO_OWNER/$BITBUCKET_REPO_SLUG/downloads/" -F files=@aqua_lambda.zip
The page explains the $BB_ACCESS environment variable further down; the others are loaded in at runtime for you.
You'll need to find the file path Mono compiles to and adjust the example's command to push to Bitbucket Downloads; Amazon S3 is a good option too.
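For example, a minimal sketch (the MyApp/bin/Release path is an assumption; adjust it to wherever your solution's output actually lands, and note zip may need installing first with apt-get update && apt-get -y install zip):
zip -r Release.zip MyApp/bin/Release/
curl -v -u $BB_ACCESS -X POST "https://api.bitbucket.org/2.0/repositories/$BITBUCKET_REPO_OWNER/$BITBUCKET_REPO_SLUG/downloads/" -F files=@Release.zip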

A bit late for this answer...
Firstly, use msbuild instead of xbuild as xbuild is deprecated.
Now, what you want is a successful build as well as a push of the release to Bitbucket Downloads.
Here is how you do it:
1. Create an App password for the repository owner
Log in to Bitbucket as the repository owner (also the user who will upload the files) and go to Bitbucket Settings > App Passwords.
Create a new app password with write permissions to your repositories, and take note of the generated password that pops up. The name of the password is only for your reference, so use "Pipelines" or any other name you like.
You should now have two values that you will need for the next step.
<username>: Bitbucket username of the repository owner (and also the user who will upload the artifacts)
<password>: App password as generated by bitbucket
2. Create a Pipelines environment variable with the authentication token
Define a new secure environment variable in your Pipelines settings:
Parameter name: BB_AUTH_STRING
Parameter value: <username>:<password> (using the values from step 1)
You can define this environment variable either in the repository settings, or in the settings for the account that owns the repository.
(Note that when a team owns the repository, you must configure the environment variable in the team settings for it to be visible in Pipelines.)
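To sanity-check the app password before wiring it into the pipeline, you can call the API by hand (a sketch; substitute your real values):
curl -s --user "<username>:<password>" "https://api.bitbucket.org/2.0/repositories/<username>"
A JSON list of your repositories means the credentials work.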
3. Enable your Pipeline to deploy your artifacts using curl and the Bitbucket REST API
First, add a line to zip your release dir:
- zip -r bin/Release.zip bin/Release/
The pipeline image may not have zip installed. To install it, add the following lines to your pipeline before the command above:
- apt-get update
- apt-get -y install zip
Now finally add the curl command that uses Bitbucket REST API:
- curl -X POST --user "${BB_AUTH_STRING}" "https://api.bitbucket.org/2.0/repositories/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}/downloads" --form files=@"bin/Release.zip"
If you wish, you can remove the now-unneeded release zip file from the bin dir, as the zip is already in Bitbucket Downloads:
- rm -f bin/Release.zip
Here is the full bitbucket-pipelines.yml:
image: mono
pipelines:
  default:
    - step:
        script:
          - nuget restore
          - MONO_IOMAP=case msbuild /p:Configuration="Release" /p:Platform="AnyCPU" Solution.sln
          - apt-get update
          - apt-get -y install zip
          - zip -r bin/Release.zip bin/Release/
          - curl -X POST --user "${BB_AUTH_STRING}" "https://api.bitbucket.org/2.0/repositories/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}/downloads" --form files=@"bin/Release.zip"
          - rm -f bin/Release.zip
Note that your release directory may differ from the sample above.
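If you want to verify the upload from the pipeline itself, a GET against the same endpoint lists the repository's downloads (a sketch using the same variables as above):
curl -s --user "${BB_AUTH_STRING}" "https://api.bitbucket.org/2.0/repositories/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}/downloads"
Release.zip should appear in the returned JSON and under Downloads in the Bitbucket UI.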


Referencing a parent file from a dockerfile [duplicate]

How can I include files from outside of Docker's build context using the "ADD" command in the Dockerfile?
From the Docker documentation:
The path must be inside the context of the build; you cannot ADD ../something/something, because the first step of a docker build is to send the context directory (and subdirectories) to the docker daemon.
I do not want to restructure my whole project just to accommodate Docker in this matter. I want to keep all my Docker files in the same sub-directory.
Also, it appears Docker does not yet (and may not ever) support symlinks: Dockerfile ADD command does not follow symlinks on host #1676.
The only other thing I can think of is to include a pre-build step to copy the files into the Docker build context (and configure my version control to ignore those files). Is there a better workaround than that?
The best way to work around this is to specify the Dockerfile independently of the build context, using -f.
For instance, this command will give the ADD command access to anything in your current directory.
docker build -f docker-files/Dockerfile .
Update: Docker now allows having the Dockerfile outside the build context (fixed in 18.03.0-ce). So you can also do something like
docker build -f ../Dockerfile .
I often find myself utilizing the --build-arg option for this purpose. For example after putting the following in the Dockerfile:
ARG SSH_KEY
RUN echo "$SSH_KEY" > /root/.ssh/id_rsa
You can just do:
docker build -t some-app --build-arg SSH_KEY="$(cat ~/file/outside/build/context/id_rsa)" .
But note the following warning from the Docker documentation:
Warning: It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc. Build-time variable values are visible to any user of the image with the docker history command.
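If you do need a secret at build time, one commonly suggested alternative is a BuildKit secret mount, which avoids persisting the value in any image layer. A minimal sketch, assuming BuildKit is available and your Dockerfile consumes the secret with RUN --mount=type=secret,id=ssh_key:
DOCKER_BUILDKIT=1 docker build -t some-app --secret id=ssh_key,src="$HOME/.ssh/id_rsa" .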
I spent a good while trying to figure out a good pattern and how best to explain what's going on with this feature. I realized that the best way to explain it was as follows...
Dockerfile: Will only see files under its own relative path
Context: a place in "space" where the files you want to share and your Dockerfile will be copied to
So, with that said, here's an example of the Dockerfile that needs to reuse a file called start.sh
Dockerfile
It always loads from its relative path, with its own current directory as the local reference for the paths you specify.
COPY start.sh /runtime/start.sh
Files
Following this idea, we can have multiple Dockerfiles building specific things, but they all need access to start.sh.
./all-services/
/start.sh
/service-X/Dockerfile
/service-Y/Dockerfile
/service-Z/Dockerfile
./docker-compose.yaml
Considering this structure and the files above, here's the docker-compose.yaml.
In this example, your shared context directory is the all-services directory.
Same mental model here: think of all the files under this directory as moved over to the so-called context.
Similarly, you specify the Dockerfile you want for each service relative to that same directory, using the dockerfile key.
The docker-compose.yaml is as follows:
version: "3.3"
services:
service-A
build:
context: ./all-service
dockerfile: ./service-A/Dockerfile
service-B
build:
context: ./all-service
dockerfile: ./service-B/Dockerfile
service-C
build:
context: ./all-service
dockerfile: ./service-C/Dockerfile
all-services is set as the context; the shared file start.sh is copied there, as is the Dockerfile specified by each dockerfile key.
Each service gets built its own way, sharing the start file!
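With that docker-compose.yaml in place, a single command should build all three services against the shared context:
docker-compose build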
On Linux you can mount other directories instead of symlinking them
mount --bind olddir newdir
See https://superuser.com/questions/842642 for more details.
I don't know if something similar is available for other OSes.
I also tried using Samba to share a folder and remount it into the Docker context which worked as well.
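A minimal sketch of the bind-mount workflow (paths are placeholders, and mounting requires root):
mkdir -p ./external
sudo mount --bind /path/outside/context ./external
docker build -t myimage .
sudo umount ./external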
If you read the discussion in issue 2745, not only may Docker never support symlinks, it may never support adding files outside your context. It seems to be a design philosophy that files that go into a docker build should explicitly be part of its context, or come from a URL where they are presumably also deployed with a fixed version, so that the build is repeatable from well-known URLs or files shipped with the docker container.
I prefer to build from a version controlled source - i.e. docker build -t stuff http://my.git.org/repo - otherwise I'm building from some random place with random files.
fundamentally, no.... -- SvenDowideit, Docker Inc
Just my opinion, but I think you should restructure to separate the code and Docker repositories. That way the containers can be generic and pull in any version of the code at run time rather than build time.
Alternatively, use Docker as your fundamental code deployment artifact and put the Dockerfile in the root of the code repository. If you go this route, it probably makes sense to have a parent Docker container for more general system-level details and a child container for the setup specific to your code.
I believe the simpler workaround would be to change the 'context' itself.
So, for example, instead of giving:
docker build -t hello-demo-app .
which sets the current directory as the context, if you want the parent directory as the context instead, just use:
docker build -t hello-demo-app ..
You can also create a tarball of what the image needs first and use that as your context.
https://docs.docker.com/engine/reference/commandline/build/#/tarball-contexts
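A sketch of the tarball approach (names are placeholders; the Dockerfile must sit at the root of the tar):
# -C switches directory mid-archive, letting you pull in files from outside the project
tar -czf context.tar.gz Dockerfile src/ -C /path/outside shared-file
docker build -t myimage - < context.tar.gz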
This behavior is determined by the context directory that docker or podman uses to present the files to the build process.
A nice trick is to set the context dir in the build instruction to the full path of the directory you want to expose to the daemon.
e.g.:
docker build -t imageName:tag -f /path/to/the/Dockerfile /mysrc/path
Using /mysrc/path instead of . (the current directory), you'll be using that directory as the context, so any files under it can be seen by the build process.
In this example you'll be exposing the entire /mysrc/path tree to the Docker daemon.
When using this with docker, the user who triggered the build must have recursive read permissions to every directory and file in the context dir.
This can be useful in cases where you have /home/user/myCoolProject/Dockerfile but want to bring files into the container build context that aren't in the same directory.
Here is an example of building using context dir, but this time using podman instead of docker.
Let's take as an example a Dockerfile with a COPY or ADD instruction that copies files from a directory outside of your project, like:
FROM myImage:tag
...
...
COPY /opt/externalFile ./
ADD /home/user/AnotherProject/anotherExternalFile ./
...
In order to build this, with the container file located at /home/user/myCoolProject/Dockerfile, just do something like:
cd /home/user/myCoolProject
podman build -t imageName:tag -f Dockerfile /
A known use case for changing the context dir is using a container as a toolchain for building your source code.
e.g:
podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile /tmp/mysrc
or it can be a relative path, like:
podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile ../../
Another example, this time with the global paths omitted from the Dockerfile:
FROM myImage:tag
...
...
COPY externalFile ./
ADD AnotherProject ./
...
Notice that now the full global paths for COPY and ADD are omitted in the Dockerfile command layers.
In this case the context dir must be where the files live: if both externalFile and AnotherProject are in the /opt directory, then the context dir for building must be:
podman build -t imageName:tag -f ./Dockerfile /opt
Note when using COPY or ADD with a context dir in docker:
The docker client will "stream" all the files visible in the context dir tree to the daemon, which can slow down the build, and it requires the user to have recursive read permission on the context dir. This can be especially costly when driving the build through the API. With podman, however, the build starts immediately and no recursive permissions are needed, because podman does not enumerate the entire context dir and doesn't use a client/server architecture. For such cases it can be much more attractive to use podman instead of docker when you run into these issues with a different context dir.
Some references:
https://docs.docker.com/engine/reference/commandline/build/
https://docs.podman.io/en/latest/markdown/podman-build.1.html
As described in this GitHub issue, the build actually happens in /tmp/docker-12345, so a relative path like ../relative-add/some-file is relative to /tmp/docker-12345. It would thus search for /tmp/relative-add/some-file, which is also shown in the error message.
It is not allowed to include files from outside the build directory, so this results in the "Forbidden path" message.
Using docker-compose, I accomplished this by creating a service that mounts the volumes I need and committing the image of the container. Then, in the subsequent service, I rely on the previously committed image, which has all of the data stored at the mounted locations. You will then have to copy these files to their final destination, as host-mounted directories do not get committed when running docker commit.
You don't have to use docker-compose to accomplish this, but it makes life a bit easier
# docker-compose.yml
version: '3'
services:
  stage:
    image: alpine
    volumes:
      - /host/machine/path:/tmp/container/path
    command: sh -c "cp -r /tmp/container/path /final/container/path"
  setup:
    image: stage
# setup.sh
# Start "stage" service
docker-compose up stage
# Commit changes to an image named "stage"
docker commit $(docker-compose ps -q stage) stage
# Start setup service off of stage image
docker-compose up setup
Create a wrapper docker build shell script that grabs the file, then calls docker build, then removes the file.
A simple solution not mentioned anywhere here from my quick skim:
have a wrapper script called docker_build.sh
have it create tarballs, copy large files to the current working directory
call docker build
clean up the tarballs, large files, etc
This solution is good because (1) it doesn't have the security hole of copying in your SSH private key, and (2) it doesn't need a sudo bind mount, which has its own security hole because it requires root permission.
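A minimal sketch of such a wrapper (file names are placeholders):
#!/bin/bash
# docker_build.sh: stage outside files into the context, build, then clean up
set -e
cp ../outside-context/large-file ./large-file
docker build -t myimage .
rm ./large-file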
I think as of earlier this year a feature was added in buildx to do just this.
If you have Dockerfile 1.4+ and buildx 0.8+ you can do something like this:
docker buildx build --build-context othersource=../something/something .
Then in your Dockerfile you can use the --from flag to reference the named context:
COPY --from=othersource . /stuff
See this related post https://www.docker.com/blog/dockerfiles-now-support-multiple-build-contexts/
Workaround with links:
ln path/to/file/outside/context/file_to_copy ./file_to_copy
On Dockerfile, simply:
COPY file_to_copy /path/to/file
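The build then runs with the normal context, and you can drop the link afterwards (a sketch; note that ln makes a hard link, so the source must be on the same filesystem as the build context):
docker build -t myimage .
rm ./file_to_copy   # optional cleanup so the copy never gets committed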
I was personally confused by some answers, so decided to explain it simply.
You should pass to docker the context that the paths in your Dockerfile assume when you want to create the image.
I always select the root of the project as the context.
So, for example, if you use a COPY command like COPY . ., the first dot (.) is the context and the second dot (.) is the container working directory.
Assuming the context is the project root, dot (.), and the code structure is like this:
sample-project/
docker/
Dockerfile
If you want to build the image and your path (the path where you run the docker build command) is /full-path/sample-project/, you should do this:
docker build -f docker/Dockerfile .
and if your path is /full-path/sample-project/docker/, you should do this:
docker build -f Dockerfile ../
An easy workaround might be to simply mount the volume (using the -v or --mount flag) to the container when you run it and access the files that way.
example:
docker run -v /path/to/file/on/host:/desired/path/to/file/in/container/ image_name
For more, see: https://docs.docker.com/storage/volumes/
I had this same issue with a project and some data files that I wasn't able to move inside the repo context for HIPAA reasons. I ended up using two Dockerfiles. One builds the main application without the stuff I needed outside the container and publishes it to an internal repo. A second Dockerfile then pulls that image, adds the data, and creates a new image which is deployed and never stored anywhere. Not ideal, but it worked for my purposes of keeping sensitive information out of the repo.
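A sketch of that two-build flow (the registry, tags, and paths are hypothetical):
# first Dockerfile: application only, safe to publish to the internal repo
docker build -f Dockerfile.app -t registry.internal/myapp:base .
docker push registry.internal/myapp:base
# second Dockerfile starts FROM registry.internal/myapp:base and adds the data;
# the result is deployed directly and never stored in a registry
docker build -f Dockerfile.data -t myapp:deploy /secure/data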
In my case, my Dockerfile is written like a template containing placeholders which I'm replacing with real value using my configuration file.
So I couldn't specify this file directly, but had to pipe it into docker build like this:
sed "s/%email_address%/$EMAIL_ADDRESS/;" ./Dockerfile | docker build -t katzda/bookings:latest . -f -;
Because of the pipe, a plain docker build - would not work: with only - and no -f flag, neither the context nor a usable Dockerfile path is provided, so the COPY command fails. The form above solves it with . -f -: the current directory is still sent as the context, while the Dockerfile itself is read from stdin.
How to share typescript code between two Dockerfiles
I had this same problem, but for sharing files between two typescript projects. Some of the other answers didn't work for me because I needed to preserve the relative import paths between the shared code. I solved it by organizing my code like this:
api/
Dockerfile
src/
models/
index.ts
frontend/
Dockerfile
src/
models/
index.ts
shared/
model1.ts
model2.ts
index.ts
.dockerignore
Note: after extracting the shared code into that top folder, I avoided needing to update the import paths, because I updated api/src/models/index.ts and frontend/src/models/index.ts to re-export from shared (e.g. export * from '../../../shared').
Since the build context is now one directory higher, I had to make a few additional changes:
Update the build command to use the new context:
docker build -f Dockerfile .. (two dots instead of one)
Use a single .dockerignore at the top level to exclude all node_modules. (eg **/node_modules/**)
Prefix the Dockerfile COPY commands with api/ or frontend/
Copy shared (in addition to api/src or frontend/src)
WORKDIR /usr/src/app
COPY api/package*.json ./ <---- Prefix with api/
RUN npm ci
COPY api/src api/ts*.json ./ <---- Prefix with api/
COPY shared /usr/src/shared <---- ADDED
RUN npm run build
This was the easiest way I could get everything into docker while preserving the relative import paths in both projects. The tricky (annoying) part was all the changes caused by the build context being one directory up.
One quick and dirty way is to set the build context up as many levels as you need - but this can have consequences.
If you're working in a microservices architecture that looks like this:
./Code/Repo1
./Code/Repo2
...
You can set the build context to the parent Code directory and then access everything, but it turns out that with a large number of repositories, this can result in the build taking a long time.
An example situation could be that another team maintains a database schema in Repo1 and your team's code in Repo2 depends on it. You want to dockerise this dependency with some of your own seed data, without worrying about schema changes or polluting the other team's repository (depending on what the changes are, you may of course still have to change your seed data scripts).
The second approach is hacky but gets around the issue of long builds:
Create an sh (or ps1) script in ./Code/Repo2 to copy the files you need and invoke the docker commands you want, for example:
#!/bin/bash
rm -rf ./db/schema
mkdir -p ./db/schema
cp -r ../Repo1/db/schema/* ./db/schema/
docker-compose -f docker-compose.yml down
docker container prune -f
docker-compose -f docker-compose.yml up --build
In the docker-compose file, simply set the context as the Repo2 root and use the content of the ./db/schema directory in your Dockerfile without worrying about the path.
Bear in mind that you will run the risk of accidentally committing this directory to source control, but scripting cleanup actions should be easy enough.

Build and Deploy C# Net Framework in GitLab CI CD

Asking very generally, as I am completely new to GitLab CI/CD and CI/CD in general.
I have implemented a C# .NET Framework application in Visual Studio 2017 and manage it with GitLab.
Now I want to automatically create a binary of that application with the GitLab CI/CD feature.
I already have a runner (Docker-based, on Windows) registered in GitLab that runs jobs.
My current YAML file looks as following:
default:
  image: ruby:2.7.2

job:
  stage: build
  only:
    - branches
  script:
    - set -m
    - echo "Start build"
    - '"C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\15.0\Bin\MSBuild.exe" /p:Configuration=Release /clp:ErrorsOnly; Build "NameOfApplication.sln"'
  artifacts:
    expire_in: 1 days
    paths:
      - '.\IndoorNavigation\bin\Release'
How can I now get the artifact/binary MSBuild generates? It should show up in the pipeline for download, right? I already tried the artifacts option with different paths, unsuccessfully.
I would also be happy with links to good tutorials, as I have not found any that helped me.
Edit: Output of the Job:
Running with gitlab-runner 14.3.1 (8b63c432)
on docker-runner 2tZDZ6cX
Preparing the "docker" executor
Using Docker executor with image ruby:2.7.2 ...
Pulling docker image ruby:2.7.2 ...
Using docker image sha256:e6c92ed2f03be9788b80944e148783bef8e7d0fa8d9755b62e9f03429e85a327 for ruby:2.7.2 with digest ruby@sha256:1dd0106849233fcd913b7c4608078fa1a53a5e3ce1af2a55e4d726b0d8868e2f ...
Preparing environment
00:01
Running on runner-2tzdz6cx-project-1235-concurrent-0 via d98bb3402720...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/user/NameOfApplication/.git/
Checking out 2e50e922 as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script
Using docker image sha256:e6c92ed2f03be9788b80944e148783bef8e7d0fa8d9755b62e9f03429e85a327 for ruby:2.7.2 with digest ruby@sha256:1dd0106849233fcd913b7c4608078fa1a53a5e3ce1af2a55e4d726b0d8868e2f ...
$ set -m
$ echo "Start build" - '"C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\15.0\Bin\MSBuild.exe" /p:Configuration=Release /clp:ErrorsOnly; Build "NameOfApplication.sln"'
Start build - "C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\15.0\Bin\MSBuild.exe" /p:Configuration=Release /clp:ErrorsOnly; Build "NameOfApplication.sln"
Uploading artifacts for successful job
00:01
Uploading artifacts...
WARNING: .\NameOfApplication\bin\Release: no matching files
ERROR: No files to upload
Cleaning up project directory and file based variables
00:01
Job succeeded

How to perform code analysis in SonarQube in Docker for an ASP.NET web application

I have the official SonarQube Docker image running successfully at http://localhost:32768/ (it is the one provided by Docker).
We want to perform some code analysis on C#/.NET; the application is located in a folder called c:\myapplication (that is where I have the .csproj and .sln files).
My question is HOW! How can I tell SonarQube, running in Docker at http://localhost:32768/, that I want to analyse my C# code?
Thanks a lot!
Since the second answer provided seemed to be Windows-specific, I decided to write down how it can be done on Linux, including the SonarQube installation.
1.) Run the command:
$ sudo docker pull sonarqube
This will pull the docker image.
2.) Run the server using the command:
$ sudo docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube
This will run the SonarQube server. You can then visit the server in Firefox at http://localhost:9000 and log in with the user "admin" and password "admin".
Analyzing projects
1.) Download the sonarqube scanner from here:
https://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner+for+MSBuild
2.) Unzip and place the files in a folder.
3.) Give executable permissions to the sonar-scanner file as follows:
$ chmod +x <path-to-extracted-folder>/sonar-scanner-3.2.0.1227/bin/sonar-scanner
4.) Create a project using the user interface in Firefox (localhost:9000).
5.) Go to the directory where the .sln file is and run the project commands. They will take one of the following forms:
$ SonarScanner.MSBuild.exe begin /k:"project-key"
$ MSBuild.exe /t:Rebuild
$ SonarScanner.MSBuild.exe end
or
$ dotnet <path to SonarScanner.MSBuild.dll> begin /k:"project-key"
$ dotnet build
$ dotnet <path to SonarScanner.MSBuild.dll> end
Either should work. Afterwards you can see your results in Firefox.
To be pedantic about it, SonarQube doesn't analyze code. It presents the results of analyses to you (okay, it does actually do some further metric aggregation and calculation server-side). Instead, you want to tell the SonarQube Scanner for MSBuild to analyze your code. Doing so is fully documented.
Essentially
install the scanner
execute the Begin step (this tells the scanner to start listening)
(re)build your solution
execute the End step (tells the scanner to stop listening and process what it heard)
browse your project in the SonarQube interface.
The specific commands would look something like this:
SonarQube.Scanner.MSBuild.exe begin /k:"org.sonarqube:sonarqube-scanner-msbuild" /n:"Project Name" /v:"1.0"
MSBuild.exe /t:Rebuild
SonarQube.Scanner.MSBuild.exe end
Thanks for your answers, but after a couple of hours I decided to write a post on my blog about how to do it step by step.
I know that there is plenty of documentation, but there are many bits and pieces to touch before you can see the analysis done properly.
I decided to share my views and results with you:
http://netsourcecode.blogspot.co.uk/2017/01/continuous-code-quality-with-net.html
Have fun!

ASP.NET Core WebApp not building on Travis CI

I'm getting the following error on Travis CI when trying to build an ASP.NET Core web app:
Could not find project file /usr/lib/mono/xbuild/Microsoft/VisualStudio/v14.0/DotNet/Microsoft.DotNet.Props, to import. Ignoring.
It builds on AppVeyor. Is there any way to install the missing file?
Note that I'm new to Travis CI, so please include a reference (e.g. a link or step-by-step guide) on how to implement your suggestion, thank you.
After trial and error we came up with this:
Put these files into the root of your repo https://github.com/aspnet/KoreBuild/tree/1.0.0/template
Copy .travis.yml from an aspnet project, e.g. https://github.com/aspnet/EntityFramework/blob/dev/.travis.yml
Remove parts you don't want, like branches and notifications
Make sure your solution and global.json is in the same directory as build.sh
I haven't found any documentation for it, so if it doesn't do what you want, you can just let it install dotnet and run the commands yourself (e.g. dotnet publish)
Old answer:
If you don't solve the problem with xbuild, you can try using the dotnet CLI. The install script for RTM is here:
https://raw.githubusercontent.com/dotnet/cli/rel/1.0.0-preview2/scripts/obtain/dotnet-install.sh
Then use dotnet restore and dotnet build (cd to the directory with project.json).
Change your .travis.yml to this:
language: csharp
install:
  - curl -s https://raw.githubusercontent.com/dotnet/cli/rel/1.0.0-preview2/scripts/obtain/dotnet-install.sh | bash
script:
  - dotnet restore WebApp/src/WebApp/project.json
  - dotnet build WebApp/src/WebApp/project.json
addons:
  apt:
    packages:
      - gettext
      - libcurl4-openssl-dev
      - libicu-dev
      - libssl-dev
      - libunwind8
      - zlib1g
I'm not sure all of the apt packages are necessary. Source: http://andrewlock.net/adding-travis-ci-to-a-net-core-app/
It's also possible to use KoreBuild:
https://github.com/aspnet/KoreBuild/tree/1.0.0/template
script: ./build.sh  # add the file to your repo

Adding C# support to SCons on Mac

I found C# support for SCons (https://bitbucket.org/russel/scons_csharp/overview), but I don't know where the Python scripts should be copied to.
I installed SCons with brew, so I have the /usr/local/Cellar/scons/2.3.4 directory on my Mac.
What should be the next step to install the C# builders?
Please visit the index of all external SCons tools at http://www.scons.org/wiki/ToolsIndex. Under the section "Install and usage" you can find a list of search directories for each platform.
Note that, since the C# support is not a core package, it's not installed with your default SCons distribution. Instead, it's treated as a customization (decoration?) of the standard sources, hence the machine/user-specific search paths.
Create a directory ~/.scons/site_scons/site_tools and clone the tool into it:
mkdir -p ~/.scons/site_scons/site_tools
cd ~/.scons/site_scons/site_tools
hg clone https://bitbucket.org/russel/scons_csharp
Change one line (line 460) of csharp.py (~/.scons/site_scons/site_tools/scons_csharp/csharp.py):
env['CSC'] = env.Detect('mcs') or 'csc'
We need this change because the default compiler setting (gmcs) is outdated.
Create the build file, SConstruct:
env = Environment(
    tools=['scons_csharp']
)
sources = ['Hello.cs']
prog = env.CLIProgram('myapp', sources)
Execute scons -Q to get:
mcs -nologo -noconfig -out:.../myapp.exe Hello.cs
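For reference, Hello.cs can be any minimal program; one way to create it from the shell (a sketch):
cat > Hello.cs <<'EOF'
class Hello {
    static void Main() {
        System.Console.WriteLine("Hello from SCons and Mono");
    }
}
EOF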
References
http://www.scons.org/wiki/ToolsIndex
http://www.scons.org/wiki/CsharpBuilder
