ANSI-Coloring Console Output with .NET

I'm trying to generate colored console output using ANSI escape codes with the following minimal C# program:
// test.cs
using System;

class foo {
    static void Main(string[] args) {
        Console.WriteLine("\x1b[36mTEST\x1b[0m");
    }
}
I am running Ansicon v1.66 on Windows 7 x64 with csc.exe (Microsoft (R) Visual C# Compiler version 4.6.0081.0).
In general, colored output works fine in this configuration; Ansicon itself is working flawlessly.
To cross-check I use a node.js one-liner that is 100% equivalent to the C# program:
// test.js
console.log("\x1b[36mTEST\x1b[0m");
And, even more basic, a hand-crafted text file:
Both of which correctly do the expected thing: print a teal-colored string "TEST".
Only the test.exe I built with csc prints something else. Why?

I've created a small plugin (available on NuGet) that allows you to easily wrap your strings in ANSI color codes. Both foreground and background colors are supported.
It works by extending the String object, and the syntax is very simple:
"colorize me".Pastel("#1E90FF");
After which the string is ready to be printed to the console.
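For example, a minimal sketch (assuming the Pastel package from NuGet and its Pastel string extension; the namespace and the hex color are just illustrative):

// requires the Pastel NuGet package
using System;
using Pastel;

class Demo
{
    static void Main()
    {
        // Pastel() wraps the text in the ANSI escape codes for the given foreground color
        Console.WriteLine("colorize me".Pastel("#1E90FF"));
    }
}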

Your program needs to be compiled with /platform:x64 if you use the AnsiCon x64 environment, and with /platform:x86 if you use the AnsiCon x86/32-bit version. The exact reason is a mystery...
Originally I thought you needed all of the following: grab the standard output stream and make Console.WriteLine believe it is writing to a file instead of a console, using an ASCII encoding.
This is how that looks:
// requires: using System.IO; and using System.Text;
var stdout = Console.OpenStandardOutput();
var con = new StreamWriter(stdout, Encoding.ASCII);
con.AutoFlush = true;
Console.SetOut(con);
Console.WriteLine("\x1b[36mTEST\x1b[0m");
The .NET Console.WriteLine uses an internal __ConsoleStream that checks whether Console.Out is a file handle or a console handle. By default it uses a console handle and therefore writes to the console by calling WriteConsoleW. In the remarks for WriteConsole you find:
Although an application can use WriteConsole in ANSI mode to write ANSI characters, consoles do not support ANSI escape sequences. However, some functions provide equivalent functionality. For more information, see SetCursorPos, SetConsoleTextAttribute, and GetConsoleCursorInfo.
To write the bytes directly to the console without WriteConsoleW interfering, a plain file handle/stream will do; that is what OpenStandardOutput provides. Wrapping that stream in a StreamWriter lets us set it back with Console.SetOut, and we are done: the byte sequences are sent to the output stream and picked up by AnsiCon.
Do note that this is only usable with a terminal emulator that interprets the escape sequences, such as AnsiCon.

I encountered this question today and could not get the accepted answer to work. After some research of my own, I found an approach that does work.
It is a pity, but we need to go fairly low-level here and call the Windows API directly. For convenience I'm using the PInvoke.Kernel32 NuGet package, but if that is too heavyweight for you, you can write the P/Invoke mappings yourself.
The following method illustrates how to activate ANSI codes:
bool TryEnableAnsiCodesForHandle(Kernel32.StdHandle stdHandle)
{
    var consoleHandle = Kernel32.GetStdHandle(stdHandle);
    if (Kernel32.GetConsoleMode(consoleHandle, out var consoleBufferModes) &&
        consoleBufferModes.HasFlag(Kernel32.ConsoleBufferModes.ENABLE_VIRTUAL_TERMINAL_PROCESSING))
        return true;

    consoleBufferModes |= Kernel32.ConsoleBufferModes.ENABLE_VIRTUAL_TERMINAL_PROCESSING;
    return Kernel32.SetConsoleMode(consoleHandle, consoleBufferModes);
}
To enable it for StdOut, you call it like this:
TryEnableAnsiCodesForHandle(Kernel32.StdHandle.STD_OUTPUT_HANDLE);
If the method returns true, ANSI codes are enabled; otherwise they are not.
The solution uses the low-level Windows API functions GetConsoleMode and SetConsoleMode to check whether the console mode flag ENABLE_VIRTUAL_TERMINAL_PROCESSING is set, and if it is not, it tries to set it.
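If you would rather avoid the NuGet dependency, here is a minimal hand-written P/Invoke sketch along the same lines (the class and method names are mine; the constants are the documented Win32 values):

using System;
using System.Runtime.InteropServices;

static class AnsiConsole
{
    const int STD_OUTPUT_HANDLE = -11;
    const uint ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x0004;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr GetStdHandle(int nStdHandle);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GetConsoleMode(IntPtr hConsoleHandle, out uint lpMode);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetConsoleMode(IntPtr hConsoleHandle, uint dwMode);

    public static bool TryEnableAnsiCodes()
    {
        var handle = GetStdHandle(STD_OUTPUT_HANDLE);
        if (!GetConsoleMode(handle, out uint mode))
            return false;
        if ((mode & ENABLE_VIRTUAL_TERMINAL_PROCESSING) != 0)
            return true; // already enabled
        return SetConsoleMode(handle, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING);
    }
}

Note that ENABLE_VIRTUAL_TERMINAL_PROCESSING is only supported on Windows 10 (version 1511 and later); on older systems SetConsoleMode simply fails and you still need something like AnsiCon.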

Related

ICE: trying to add a local var with the same name, but different types. during [_RegisterClipboardFormat]

I have a PoC to use an existing Java code base in a UWP app, using the most current Visual Studio Community 2019 (version 16.3.2) and the latest released IKVM 8.1.7195.0. The app builds and runs fine in Debug mode, but already fails to build in Release mode with the following error:
MCG0004:InternalAssert Assert Failed: ICE: trying to add a local var
with the same name, but different types. during
[_RegisterClipboardFormat] Ams.Oms.Poc
RegisterClipboardFormat is part of IKVM:
@DllImportAttribute.Annotation(value = "user32.dll", EntryPoint = "RegisterClipboardFormat")
private native static int _RegisterClipboardFormat(String format);

@cli.System.Security.SecuritySafeCriticalAttribute.Annotation
private static int RegisterClipboardFormat(String format)
{
    return _RegisterClipboardFormat(format);
}
https://github.com/ikvm-revived/ikvm/blob/master/openjdk/sun/awt/IkvmDataTransferer.java#L95
What I'm wondering is which local variable the error message is referring to. It might be something added implicitly, or it might have to do with String in Java vs. string in C#? On the other hand, that file is clearly named .java.
I didn't find much about the error message in general; only the following two links seemed somewhat relevant:
Variables having same name but different type
Why doesn't C# allow me to use the same variable name in different scopes?
So I'm currently not even sure where the message comes from: Visual Studio/C# directly, or IKVM code running during the Release-mode build. I strongly suspect the error is coming from Visual Studio/C#, though.
Searching for the function itself doesn't reveal much help either:
Sorry, AWT is not a supported part of IKVM.
https://sourceforge.net/p/ikvm/bugs/225/
Others seemed to have the same problem, because CN1 simply disabled that code entirely in their fork of IKVM:
//@DllImportAttribute.Annotation(value = "user32.dll", EntryPoint = "RegisterClipboardFormat")
//private native static int _RegisterClipboardFormat(String format);

@cli.System.Security.SecuritySafeCriticalAttribute.Annotation
private static int RegisterClipboardFormat(String format)
{
    throw new Error("Not implemented");
    //return _RegisterClipboardFormat(format);
}
https://github.com/ams-ts-ikvm/cn1-ikvm-uwp/blob/master/openjdk/sun/awt/IkvmDataTransferer.java#L95
Any ideas? Thanks!
There seems to be a workaround that requires no code change at all: the settings of the Release build contain a checkbox for whether to compile with the .NET Native tool chain, which is enabled by default. Disabling it makes the build succeed without any code change, and the build is as fast as the Debug build again (before the change, the Release build also took a lot longer).
I don't know what that means for actually calling native code, i.e. whether such calls would fail or not, because my app doesn't use them; I guess that would depend on whether they work in Debug. Additionally, I'm not sure whether the Windows Store accepts such a modified Release build, but as UWP apps aren't forced to use native code at all, I guess there's a good chance things will work.

G1ANT - disposing of unmanaged code in C# macros

I am enjoying using G1ANT's "macros" capability to call unmanaged code, but the unmanaged objects are of course not being automatically garbage collected absent code to do it.
My request is specifically for best practices in disposing of unmanaged code in these G1ANT C# macros, not for disposing of the same in C# generally, and it is not a request to fix the code below, which runs as is just fine.
If I were coding in C# using Visual Studio, I would likely use a System.Runtime.InteropServices.SafeHandle class, override the Finalize method, or use one of the other approaches in common use (see also this post on disposing of unmanaged objects in C#).
But none of these approaches appear to be a good fit for G1ANT macros per se, at least with my novice experience of them.
For illustration purposes I'm referring to the G1ANT code below, but WITHOUT the last line in the macro (ahk.Reset()), because with that line it runs fine, even more than once. (I'm painfully aware that there must be a much better example, but as I'm new to G1ANT, this is the only thing I have so far.) What I'm after is C# code that works in G1ANT when there is no explicit disposal of the unmanaged object:
addon core version 4.100.19170.929
addon language version 4.100.19170.929
-dialog ♥macrodlls
♥macrodlls = System.dll,System.Drawing.dll,System.Windows.Forms.dll,AutoHotkey.Interop.dll,System.Runtime.InteropServices.dll
-dialog ♥macrodlls
♥macronamespaces = System,AutoHotkey.Interop,System.Windows.Forms
⊂
var ahk = AutoHotkeyEngine.Instance;
//Load a library or exec scripts in a file
ahk.LoadFile("functions.ahk");
//execute a specific function (found in functions.ahk), with 2 parameters
ahk.ExecFunction("MyFunction", "Hello", "World");
string sayHelloFunction = "SayHello(name) \r\n { \r\n MsgBox, Hello %name% \r\n return \r\n }";
ahk.ExecRaw(sayHelloFunction);
//executes the newly made function
ahk.ExecRaw(@"SayHello(""Mario"")");
var add5Results = ahk.ExecFunction("Add5", "5");
MessageBox.Show("ExecFunction: Result of 5 with Add5 func is" + add5Results);
addon core version 4.100.19170.929
addon language version 4.100.19170.929
-dialog ♥macrodlls
♥macrodlls = System.dll,System.Drawing.dll,System.Windows.Forms.dll,AutoHotkey.Interop.dll,System.Runtime.InteropServices.dll,System.Reflection.dll,Microsoft.CSharp.dll
-dialog ♥macrodlls
♥macronamespaces = System,AutoHotkey.Interop,System.Windows.Forms,System.Reflection
⊂
var ahk = AutoHotkeyEngine.Instance;
//Load a library or exec scripts in a file
ahk.LoadFile("functions.ahk");
//execute a specific function (found in functions.ahk), with 2 parameters
ahk.ExecFunction("MyFunction", "Hello", "World");
string sayHelloFunction = "SayHello(name) \r\n { \r\n MsgBox, Hello %name% \r\n return \r\n }";
ahk.ExecRaw(sayHelloFunction);
//executes new function
ahk.ExecRaw(#"SayHello(""Mario"") ");
var add5Results = ahk.ExecFunction("Add5", "5");
MessageBox.Show("ExecFunction: Result of 5 with Add5 func is" + add5Results);
ahk.Reset();
⊃
It's taken nearly verbatim from the AutoHotkey.Interop github page.
Without the last line in the macro (ahk.Reset()), the code runs perfectly the first time through, but on the second run G1ANT still sees the previously included AutoHotkey file and warns about duplicate function definitions, though it continues and still functions properly. The as-far-as-I-can-tell-undocumented AutoHotkey.Interop method Reset() takes care of the garbage collection problem by calling:
public void Terminate()
{
    AutoHotkeyDll.ahkTerminate(1000);
}

public void Reset()
{
    Terminate();
    AutoHotkeyDll.ahkReload();
    AutoHotkeyDll.ahktextdll("", "", "");
}
Thus, the AutoHotkeyEngine instance itself appears to be garbage collected, even without the ahk.Reset();, but the AutoHotkey script it loads into an object is not.
Stopping the G1ANT.Robot application and restarting, then reloading the script above (as mentioned, without the line ahk.Reset();), works just fine, but once again only for a single run.
Edit: The given answer's advice on treating singletons is what I will use henceforth when loading AutoHotkey function scripts and the DLL itself. It seems prudent and good practice to check whether the DLL or function file has already been loaded, whether problems exist or not. "An ounce of prevention", etc. In addition, I have forked the AutoHotkey.Interop repo here, adding a boolean check to see whether the AutoHotkeyEngine instance is ready.
Best regards,
burque505
You use AutoHotkeyEngine.Instance, so I guess it's a singleton. It will stay loaded in memory as long as the corresponding DLL is kept there, and the latter is loaded and lives as long as its app domain lives. The macro app domain (the place where script stuff is placed) currently lives as long as the Robot's app domain, so in fact your singleton instance lives as long as the Robot.
Either:
don't use singleton,
or reset it right after obtaining the instance (kinda what you already did),
or treat it as a singleton with a life span longer than your script. In this case, after obtaining the singleton instance, check whether your functions file has already been loaded and only load it if it hasn't been (a sketch follows below).
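For the last option, a minimal sketch of such a guard, assuming AutoHotkey.Interop's Eval method and using AutoHotkey's IsFunc() to probe whether the functions from the file are already defined (function and file names are taken from the question's example):

var ahk = AutoHotkeyEngine.Instance;

// IsFunc() evaluates to 0 when the function is unknown to the running
// AutoHotkey instance, so functions.ahk is only loaded on the first run.
if (ahk.Eval("IsFunc(\"MyFunction\")") == "0")
{
    ahk.LoadFile("functions.ahk");
}

ahk.ExecFunction("MyFunction", "Hello", "World");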

Is there a way to turn several csharp repl commands into a terminal alias?

Every day I make a GUID which I copy to my clipboard.
I do this by opening my terminal, typing csharp (see the link below in case you are confused), typing Guid.NewGuid(), copying the output, and typing quit.
Is there any way I can turn this whole procedure into a terminal alias?
Edit:
Just to clarify, I'm using this:
https://www.mono-project.com/docs/tools+libraries/tools/repl/
You can of course write and compile a console application, but the question was geared towards whether you can inject statements directly into the command-line tool, not how to make a tiny executable.
There is an easy BSD command to generate a UUID, and it's available on macOS:
uuidgen
If you need to copy the UUID result to clipboard, use this:
uuidgen | pbcopy
So, what's the difference between UUID and GUID? Check out this thread.
Create a C# program
using System;

namespace guid
{
    class MainClass
    {
        public static void Main(string[] args)
        {
            new MainClass().run();
        }

        private void run()
        {
            Console.WriteLine(Guid.NewGuid());
        }
    }
}
Compile it to an executable named new-guid.
Use it in zsh like this:
guid=$(./new-guid) #you may have to change the `.` to the appropriate path, depending on where the program is.
echo "${guid}"
Tested with zsh and MonoDevelop on Debian GNU/Linux.
Note that there are probably better ways to do this: a one-line Perl script, or maybe some Unix command.
Here's the answer I was looking for, in this case:
csharp -e 'Guid.NewGuid();' | pbcopy
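To turn that into an actual alias (a zsh/bash sketch; the alias name newguid is arbitrary), add something like this to your shell profile:

# e.g. in ~/.zshrc: put a fresh GUID on the clipboard with one command
alias newguid="csharp -e 'Guid.NewGuid();' | pbcopy"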

Boost.Interprocess v1.66 - get_bootstamp segfault with C#

I have a problem with the Boost.Interprocess (v1.66) library, which I use in a C/C++ library that I call from C# through marshalling (calling native C code from C#).
The problem shows up when I use a Boost.Interprocess named_semaphore (in open_or_create mode) for synchronization between processes.
If I use my C/C++ lib from other native C/C++ code, everything works fine (under the newest Windows 10, Linux (4+ kernel), and even Mac OS X (>= 10.11)).
The problem occurred under Windows: for C# I have a C wrapper around the C++ code. If I use marshalling from a simple self-built EXE, everything works! But if I use the same C# code (with the same C lib) as a DLL plugin in a third-party application, I get a segfault from get_bootstamp in named_semaphore.
So I have third-party C# software for which I create plugins (C# DLLs). In such a plugin I use my C library through marshalling. The marshalling works fine in a test C# project (which just calls C functions from the C lib), but the same code segfaults in the third-party software.
C Library workflow:
Init all necessary C structures
Start desired TCP server (native C/C++ app) using Boost.Process
Wait for server (through named_semaphore) <-- segfault
Connect to the server...
The C# code follows the same workflow; a rough, purely illustrative sketch of the marshalling side is shown below.
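Since the question does not show the real wrapper, the library name and entry points in this sketch are hypothetical placeholders; it only illustrates the kind of P/Invoke calls involved:

using System.Runtime.InteropServices;

// Hypothetical C wrapper surface -- names are placeholders, not the real API.
static class NativeWrapper
{
    [DllImport("mynativewrapper", CallingConvention = CallingConvention.Cdecl)]
    public static extern int init_structures();

    [DllImport("mynativewrapper", CallingConvention = CallingConvention.Cdecl)]
    public static extern int start_server();

    // The call that internally waits on the named_semaphore and hits get_bootstamp
    [DllImport("mynativewrapper", CallingConvention = CallingConvention.Cdecl)]
    public static extern int wait_for_server(int timeoutMs);

    [DllImport("mynativewrapper", CallingConvention = CallingConvention.Cdecl)]
    public static extern int connect_to_server();
}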
Found the problem
The problem occurred in boost::interprocess::ipcdetail::get_bootstamp (which is called by named_semaphore), here:
struct windows_bootstamp
{
   windows_bootstamp()
   {
      //Throw if bootstamp not available
      if(!winapi::get_last_bootup_time(stamp)){
         error_info err = system_error_code();
         throw interprocess_exception(err);
      }
   }
   //Use std::string. Even if this will be constructed in shared memory, all
   //modules/dlls are from this process so internal raw pointers to heap are always valid
   std::string stamp;
};
inline void get_bootstamp(std::string &s, bool add = false)
{
   const windows_bootstamp &bootstamp = windows_intermodule_singleton<windows_bootstamp>::get();
   if(add){
      s += bootstamp.stamp;
   }
   else{
      s = bootstamp.stamp;
   }
}
If I debug to the line
const windows_bootstamp &bootstamp = windows_intermodule_singleton<windows_bootstamp>::get()
bootstamp.stamp is not readable: the size is set to 31, the capacity is set to some weird value (like 19452345), and the data is not readable. If I step over to
s += bootstamp.stamp;
the segfault occurs!
Found the reason
I debugged once more and set a breakpoint at the windows_bootstamp constructor entry, and it was never hit, so I guess the stamp is never initialized.
Confirmation
If I change get_bootstamp to
inline void get_bootstamp(std::string &s, bool add = false)
{
   const windows_bootstamp &bootstamp = windows_intermodule_singleton<windows_bootstamp>::get();
   std::string stamp;
   winapi::get_last_bootup_time(stamp);
   if(add){
      s += stamp;
   }
   else{
      s = stamp;
   }
}
After recompiling my lib and exe, everything works fine (without any problem).
My question is: what am I doing wrong? I read the Boost.Interprocess documentation really thoroughly, but there is no advice/warning about my problem (yes, there is a "COM Initialization" section in the Interprocess docs, but it does not seem helpful).
Or is it just a bug in Boost.Interprocess that I should report to the Boost bug tracker?
Note: if I start the server manually (before I run the C# code), it works without segfaults.

Why do 'requires' statements fail when loading (iron)ruby script via a C# program?

IronRuby and VS2010 noob question:
I'm trying to do a spike to test the feasibility of interop between a C# project and an existing RubyGem rather than re-invent that particular wheel in .net. I've downloaded and installed IronRuby and the RubyGems package, as well as the gem I'd ultimately like to use.
Running .rb files or working in the iirb Ruby console works without problems. I can load both the RubyGems package and the gem itself and use it, so, at least for that use case, my environment is set up correctly.
However, when I try to do the same sort of thing from within a C# (4.0) console app, it complains about the very first line:
require 'RubyGems'
With the error:
no such file to load -- rubygems
My Console app looks like this:
using System;
using IronRuby;

namespace RubyInteropSpike
{
    class Program
    {
        static void Main(string[] args)
        {
            var runtime = Ruby.CreateRuntime();
            var scope = runtime.ExecuteFile("test.rb");
            Console.ReadKey();
        }
    }
}
Removing the dependencies and just doing some basic self-contained Ruby stuff works fine, but including any kind of 'require' statement seems to cause it to fail.
I'm hoping that I just need to pass some additional information (paths, etc) to the ruby runtime when I create it, and really hoping that this isn't some kind of limitation, because that would make me sad.
Short answer: Yes, this will work how you want it to. You need to use the engine's SetSearchPaths method to do what you wish.
A more complete example
(Assumes you loaded your IronRuby to C:\IronRubyRC2 as the root install dir)
var engine = IronRuby.Ruby.CreateEngine();
engine.SetSearchPaths(new[] {
    @"C:\IronRubyRC2\Lib\ironruby",
    @"C:\IronRubyRC2\Lib\ruby\1.8",
    @"C:\IronRubyRC2\Lib\ruby\site_ruby\1.8"
});

engine.Execute("require 'rubygems'"); // without SetSearchPaths, you get a LoadError

/*
engine.Execute("require 'restclient'"); // install through igem, then check with igem list
engine.Execute("puts RestClient.get('http://localhost/').body");
*/

Console.ReadKey();
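To tie this back to the original program, here is a sketch (assuming the same install layout as above) that runs the existing test.rb through an engine whose search paths have been set, instead of through the bare runtime:

var engine = IronRuby.Ruby.CreateEngine();
engine.SetSearchPaths(new[] {
    @"C:\IronRubyRC2\Lib\ironruby",
    @"C:\IronRubyRC2\Lib\ruby\1.8",
    @"C:\IronRubyRC2\Lib\ruby\site_ruby\1.8"
});

// test.rb can now resolve require 'rubygems' and the gem it pulls in
var scope = engine.ExecuteFile("test.rb");
Console.ReadKey();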
