C# Ignore inside of speech, while changing other values - c#

I am trying to make a basic translator which changes values in the code; for example, a . may become ::, etc., and I can do that by using
if (code.Contains("."))
{
    // Replace returns a new string; the result must be assigned back
    code = code.Replace(".", "::");
}
But my problem is that I don't know how to ignore it inside of speech: if the sentence was "Hello.", it would be translated to "Hello::". How would I be able to stop this? (I know you can use the Regex "\".+?\"" to find speech in text.)

What you're trying to do here is vastly more complicated than you realise.
Programming languages differ in more than just appearance; they have different capabilities and syntax rules, even for two as similar as C# and C++. For a start, the single C# . covers what ., -> and :: each do in C++. There are also different rules regarding pointers, and sometimes you get issues like having a pointer to a pointer, not to mention that the * and & symbols can be binary arithmetic/logic operations or pointer operations depending on their use. There are also issues involving keywords such as const, auto and sizeof.
In short, unless you're prepared to write a proper tokeniser, you aren't going to pull this off properly. To properly translate one programming language to another you would at least have to write a good chunk of a full compiler, which is a specialist subject.
I suggest you do some research into tokenisers and lexical analysis before you go any further.
As a hint though, you'll find it easier to split your code up into an array of characters and handle one character at a time whilst keeping track of the program's state (i.e., are you currently in the middle of a string, are you between two brackets, how many levels of nesting have you come across?). By doing it that way you can at least manage to change the surface-level differences (as opposed to deeper ones like keywords and typing).
EDIT:
Some useful resources for writing tokenisers and compilers:
http://www.cs.man.ac.uk/~pjj/farrell/comp3.html
http://msdn.microsoft.com/en-us/library/vstudio/3yx2xe3h(v=vs.100).aspx
http://en.wikibooks.org/wiki/Compiler_Construction/Lexical_analysis
I speak from experience, as I did recently attempt to write my own compiler (in C#), but put the project on hold due to more important matters getting in the way.

You could probably use a regex to help you out here, but it would be fairly complex and perform poorly. You could also just do this:
var sb = new StringBuilder();
bool insideSpeech = false;
foreach (char c in code)
{
    if (c == '"')
    {
        insideSpeech = !insideSpeech; // toggle on every quote (escaped quotes like \" are not handled)
    }
    if (c == '.' && !insideSpeech)
    {
        sb.Append("::");
    }
    else
    {
        sb.Append(c);
    }
}
code = sb.ToString();
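If you did want to try the regex route despite that, one possible sketch (my illustration, not part of the original answer) is to match either a whole quoted string or a single dot, and only rewrite the dot; the quoted-string alternative wins, so dots inside speech are left alone (escaped quotes are still not handled):

using System.Text.RegularExpressions;

// Match a whole quoted string OR a lone dot; only the dot is rewritten.
code = Regex.Replace(code, "\"[^\"]*\"|\\.",
    m => m.Value == "." ? "::" : m.Value);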

Related

WebMatrix SQL injection & XSS concerns after parameterizing queries

I have a simple question that concerns security in WebMatrix Razor (C#).
I (for a very brief period of time) considered using character checking to help validate forms as a defence against SQL injection. However, after realizing that parameterizing queries is the best way to combat this, I have since commented out this check, which looks something like this:
foreach (char c in POIName)
{
    if (c == '\'' || c == '$' || c == '\"' || c == '&' || c == '%' || c == '#' || c == '-' || c == '<' || c == '>')
    {
        errorMessage = "You have entered at least one invalid character in the \"POI Name\" field. Invalid characters are: [\'], [\"], [&], [$], [#], [-], [<], [>], and [%]";
        break; // one bad character is enough to reject the input
    }
}
Now, my question is this:
Should I remove lines like these entirely, as they are not needed (to allow maximum user freedom in textareas, etc.), or should there still be some character checking specific to WebMatrix (or not specific to it, for that matter)?
Perhaps not allow "#" or ";" or "<" or ">"?
From what I understand, parameterizing the queries (as I have) auto escapes harmful characters and re-wraps the whole thing as a string (or something like that anyway, lol), and my database has proven impervious to any of the SQL injection attacks I have thrown at it.
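For reference, a parameterized command in plain ADO.NET looks something like the sketch below (a generic illustration, not the poster's actual code; connectionString, POIName and the POI table are placeholders). The key point is that the value travels as a bound parameter, never as part of the SQL text:

using System.Data.SqlClient;

// The parameter value is sent separately from the SQL text, so quotes
// and other metacharacters in POIName cannot change the query.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("INSERT INTO POI (Name) VALUES (@name)", conn))
{
    cmd.Parameters.AddWithValue("@name", POIName);
    conn.Open();
    cmd.ExecuteNonQuery();
}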
I guess I am concerned because I am not sure if Razor can be affected somehow, or if I should worry about XSS (no, I am not a blog or anything that should ever accept HTML tags or even angle brackets of any kind).
Sorry if this question is kind of all over the place, but I don't know where else to turn for as definitive an answer as Stack Overflow usually provides (I trust this community much more than a few Google searches, which I assure you I tried before posting).
Thanks for any help!
OK, two points here. You are talking about engaging in "blacklisting" in order to scrub user input. The better approach is "whitelisting": define what the allowed characters are and reject everything else. There are many ways in which an attacker can creatively get around a quickly constructed blacklist.
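A whitelist check might look something like this sketch (the allowed character set and length cap here are made up for illustration; derive yours from your actual requirements):

using System.Text.RegularExpressions;

static class InputPolicy
{
    // Whitelist: accept only characters explicitly allowed; everything else fails.
    // The allowed set and the 100-character cap are illustrative, not a recommendation.
    public static bool IsValidPoiName(string input)
    {
        return input != null && Regex.IsMatch(input, @"^[A-Za-z0-9 .,'-]{1,100}$");
    }
}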
Secondly, XSS is a tricky beast. Encoding and whitelisting can be used to combat it. I would recommend looking at Microsoft's Anti-XSS Library for advice. Generally, you're going to want to use an encoding that's appropriate to the section of the page you are trying to protect.
In summary, the best advice I can give you is to become familiar with OWASP, specifically the OWASP Top 10. It's an amazing resource for securing your applications.

evaluate an arithmetic expression stored in a string (C#)

I'm working on an application in C# in which I want to calculate an arithmetic expression that is given as a string.
So like I got a string:
string myExpr="4*(80+(5/2))+2";
And I want to calculate the outcome of the arithmetic expression.
While in a language such as JavaScript, PHP, etc. you could just use eval to do the trick, this doesn't seem to be an option in C#.
I suppose it is possible to write code to divide it into countless simple expressions, calculate them, and add them together, but this would take quite some time and I'm likely to have lots of trouble in my attempt to do so.
So... my question, Is there any 'simple' way to do this?
There's a JavaScript library you can reference; then just do something like:
var engine = VsaEngine.CreateEngine();
Eval.JScriptEvaluate(mySum, engine);
Edit: The library is Microsoft.JScript.
You could just call the JScript.NET eval function. Any .NET language can call into any other.
Have you seen http://ncalc.codeplex.com ?
It's extensible and fast (e.g. it has its own cache), and it enables you to provide custom functions and variables at run time by handling the EvaluateFunction/EvaluateParameter events. Example expressions it can parse:
Expression e = new Expression("Round(Pow(Pi, 2) + Pow([Pi2], 2) + X, 2)");
e.Parameters["Pi2"] = new Expression("Pi * Pi");
e.Parameters["X"] = 10;
e.EvaluateParameter += delegate(string name, ParameterArgs args)
{
    if (name == "Pi")
        args.Result = 3.14;
};
// Evaluate() returns object, so compare with Equals rather than ==
Debug.Assert(117.07.Equals(e.Evaluate()));
It also handles Unicode and many data types natively. It comes with an ANTLR grammar file if you want to change the grammar. There is also a fork which supports MEF to load new functions.
It also supports logical operators, date/time strings, and if statements.
I've used NCalc with great success. It's extremely flexible and allows for variables in your formulas. The formula you listed in your question could be evaluated this easily:
string myExpr = "4*(80+(5/2))+2";
decimal result = Convert.ToDecimal(new Expression(myExpr).Evaluate());
You need to implement an expression evaluator. It's fairly straightforward if you have the background, but it's not "simple". Eval in interpreted environments actually re-runs the language parser over the string; you need to emulate that operation, for the bits you care about, in your C# code.
Search for "expression evaluators" and "recursive descent parser" to get started.
If you have Bjarne Stroustrup's The C++ Programming Language, in Chapter 6 he explains step by step (in C++) how to do exactly what Chris Tavares suggests.
It's straightforward but a little heady if you're not familiar with the procedure.
I needed to do something similar for an undergrad project, and I found this
Reverse Polish Notation In C#
tutorial and code to be extremely valuable.
It's pretty much just an implementation of converting the string to Reverse Polish Notation and then evaluating it. It's extremely easy to use, understand, and extend with new functions.
Source code is included.
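For a flavour of the technique, here is a minimal sketch of my own (not from the linked tutorial): it assumes only binary +, -, *, / and parentheses, treats every number as a double, and does no error handling.

using System.Collections.Generic;

// Shunting-yard: convert infix to an RPN token queue, then fold the
// queue with a value stack.
static class RpnSketch
{
    static readonly Dictionary<char, int> Prec = new Dictionary<char, int>
    {
        ['+'] = 1, ['-'] = 1, ['*'] = 2, ['/'] = 2
    };

    public static double Evaluate(string expr)
    {
        var output = new Queue<string>();
        var ops = new Stack<char>();
        int i = 0;
        while (i < expr.Length)
        {
            char c = expr[i];
            if (char.IsDigit(c) || c == '.')
            {
                int start = i;
                while (i < expr.Length && (char.IsDigit(expr[i]) || expr[i] == '.')) i++;
                output.Enqueue(expr.Substring(start, i - start));
                continue; // number token fully consumed
            }
            if (Prec.ContainsKey(c))
            {
                // Pop operators of equal or higher precedence (left-associative).
                while (ops.Count > 0 && Prec.TryGetValue(ops.Peek(), out int p) && p >= Prec[c])
                    output.Enqueue(ops.Pop().ToString());
                ops.Push(c);
            }
            else if (c == '(') ops.Push(c);
            else if (c == ')')
            {
                while (ops.Peek() != '(') output.Enqueue(ops.Pop().ToString());
                ops.Pop(); // discard the '('
            }
            i++; // also skips whitespace
        }
        while (ops.Count > 0) output.Enqueue(ops.Pop().ToString());

        // Evaluate the RPN stream.
        var vals = new Stack<double>();
        foreach (string tok in output)
        {
            if (double.TryParse(tok, out double n)) { vals.Push(n); continue; }
            double b = vals.Pop(), a = vals.Pop();
            vals.Push(tok == "+" ? a + b : tok == "-" ? a - b : tok == "*" ? a * b : a / b);
        }
        return vals.Pop();
    }
}

With those assumptions, RpnSketch.Evaluate("4*(80+(5/2))+2") returns 332; note that 5/2 is 2.5 here, unlike C#'s integer division.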
Try something like this:
int mySum = 4*(80+(5/2))+2;
var myStringSum = mySum.ToString();

C# better way to do this?

Hi, I have this code below and am looking for a prettier/faster way to do this.
Thanks!
string value = "HelloGoodByeSeeYouLater";
string[] y = new string[] { "Hello", "You" };
foreach (string x in y)
{
    value = value.Replace(x, "");
}
You could do:
y.ToList().ForEach(x => value = value.Replace(x, ""));
Although I think your variant is more readable.
Forgive me, but someone's gotta say it,
value = Regex.Replace(value, string.Join("|", y.Select(Regex.Escape)), "");
Possibly faster, since it creates fewer strings.
EDIT: Credit to Gabe and lasseespeholt for Escape and Select.
While not any prettier, there are other ways to express the same thing.
In LINQ:
value = y.Aggregate(value, (acc, x) => acc.Replace(x, ""));
With String methods:
value = String.Join("", value.Split(y, StringSplitOptions.None));
I don't think anything is going to be faster in managed code than a simple Replace in a foreach though.
It depends on the size of the string you are searching. The foreach example is perfectly fine for small operations, but because strings are immutable it creates a new string instance on every pass, and it also searches the whole string over and over again in a linear fashion.
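One cheap middle ground worth noting (my addition, not from the original answer): StringBuilder has its own Replace that mutates the buffer in place, so the loop no longer allocates a new string per pass. value and y here are the variables from the question:

using System.Text;

var sb = new StringBuilder(value);
foreach (string x in y)
    sb.Replace(x, ""); // edits the buffer in place, no new string per pass
value = sb.ToString();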
The basic solutions have all been proposed. The Linq examples provided are good if you are comfortable with that syntax; I also liked the suggestion of an extension method, although that is probably the slowest of the proposed solutions. I would avoid a Regex unless you have an extremely specific need.
So let's explore more elaborate solutions and assume you needed to handle a string that was thousands of characters in length and had many possible words to be replaced. If this doesn't apply to the OP's need, maybe it will help someone else.
Method #1 is geared towards large strings with few possible matches.
Method #2 is geared towards short strings with numerous matches.
Method #1
I have handled large-scale parsing in C# using char arrays and pointer math, with intelligent seek operations optimized for the length and potential frequency of the term being searched for. It follows this methodology:
Extremely cheap peeks, one character at a time
Only investigate potential matches
Modify the output when a match is found
For example, you might read through the whole source array and only add words to the output when they are NOT found. This would remove the need to keep redimensioning strings.
A simple example of this technique is looking for a closing HTML tag in a DOM parser. For example, I may read an opening STYLE tag and want to skip through (or buffer) thousands of characters until I find a closing STYLE tag.
This approach provides incredibly high performance, but it's also incredibly complicated if you don't need it (plus you need to be well-versed in memory manipulation/management or you will create all sorts of bugs and instability).
I should note that the .Net string libraries are already incredibly efficient but you can optimize this approach for your own specific needs and achieve better performance (and I have validated this firsthand).
Method #2
Another alternative involves storing search terms in a Dictionary containing Lists of strings. Basically, you decide how long your search prefix needs to be and read characters from the source string into a buffer until you reach that length. Then you search your dictionary for all terms that match that string. If a match is found, you explore further by iterating through that List; if not, you know you can discard the buffer and continue.
Because the Dictionary matches strings based on hash, the search is non-linear and ideal for handling a large number of possible matches.
I'm using this methodology to allow instantaneous (<1ms) searching of every airfield in the US by name, state, city, FAA code, etc. There are 13K airfields in the US, and I've created a map of about 300K permutations (again, a Dictionary with prefixes of varying lengths, each corresponding to a list of matches).
For example, Phoenix, Arizona's main airfield is called Sky Harbor with the short ID of KPHX. I store:
KP
KPH
KPHX
Ph
Pho
Phoe
Ar
Ari
Ariz
Sk
Sky
Ha
Har
Harb
There is a cost in terms of memory usage, but string interning probably reduces this somewhat and the resulting speed justifies the memory usage on data sets of this size. Searching happens as the user types and is so fast that I have actually introduced an artificial delay to smooth out the experience.
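A stripped-down sketch of that structure (the names and prefix lengths are made up for illustration):

using System;
using System.Collections.Generic;

// Maps prefixes of varying lengths to the records that match them.
class PrefixIndex
{
    private readonly Dictionary<string, List<string>> map =
        new Dictionary<string, List<string>>(StringComparer.OrdinalIgnoreCase);

    public void Add(string term, string record, int minLen = 2, int maxLen = 4)
    {
        for (int len = minLen; len <= Math.Min(maxLen, term.Length); len++)
        {
            string key = term.Substring(0, len);
            if (!map.TryGetValue(key, out List<string> list))
                map[key] = list = new List<string>();
            list.Add(record);
        }
    }

    // Hash lookup on the typed text; non-linear regardless of record count.
    public IReadOnlyList<string> Lookup(string typed)
    {
        return map.TryGetValue(typed, out List<string> list)
            ? (IReadOnlyList<string>)list
            : Array.Empty<string>();
    }
}

With the airfield example, index.Add("KPHX", "Sky Harbor") would store the KP, KPH and KPHX keys shown above.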
Send me a message if you have the need to dig into these methodologies.
Extension method for elegance
(arguably "prettier" at the call level)
I'll implement an extension method that allows you to call your implementation directly on the original string, as seen here:
value = value.Remove(y);
// or
value = value.Remove("Hello", "You");
// effectively
string value = "HelloGoodByeSeeYouLater".Remove("Hello", "You");
The extension method is callable on any string value in fact, and therefore easily reusable.
Implementation of Extension method:
I'm going to wrap your own implementation (shown in your question) in an extension method for pretty or elegant points, and also employ the params keyword to provide some flexibility in passing the arguments. You can substitute somebody else's faster implementation body into this method.
static class EXTENSIONS
{
    public static string Remove(this string thisString, params string[] arrItems)
    {
        // Whatever implementation you like:
        if (thisString == null)
            return null;

        var temp = thisString;
        foreach (string x in arrItems)
            temp = temp.Replace(x, "");
        return temp;
    }
}
That's the brightest idea I can come up with right now that nobody else has touched on.

Identifying a C# or C++ function start in a line count program

I have a program, written in C#, that when given a C++ or C# file, counts the lines in the file, counts how many are in comments and in designer-generated code blocks. I want to add the ability to count how many functions are in the file and how many lines are in those functions. I can't quite figure out how to determine whether a line (or series of lines) is the start of a function (or method).
At the very least, a function declaration is a return type followed by the identifier and an argument list. Is there a way to determine in C# that a token is a valid return type? If not, is there any way to easily determine whether a line of code is the start of a function? Basically I need to be able to reliably distinguish something like:
bool isThere()
{
...
}
from
bool isHere = isThere()
and from
isThere()
As well as any other function declaration lookalikes.
The problem with doing this is that to do it accurately, you must take into account all of the possible ways a C# function can be defined. In essence, you need to write a parser, and doing so is beyond the scope of a simple SO answer.
There will likely be a lot of answers to this question in the form of regexes, and they will work for common cases but will likely blow up in corner cases like the following:
int
?
/* this
is */
main /* legal */ (code c) {
}
Start by scanning scopes. You need to count open braces { and close braces } as you work your way through the file, so that you know which scope you are in. You also need to parse // and /* ... */ comments as you scan, so you can tell when something is in a comment rather than being real code. There's also #if, but you would have to evaluate the preprocessor symbols to know how to interpret those.
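A sketch of that scanning loop (my illustration, C# flavour; it handles braces, // and /* */ comments, and plain string literals, and deliberately ignores #if, verbatim strings, and char literals):

// Returns the final brace depth; braces inside comments and strings don't count.
static int ScanDepth(string src)
{
    int depth = 0;
    bool lineComment = false, blockComment = false, inString = false;
    for (int i = 0; i < src.Length; i++)
    {
        char c = src[i];
        char next = i + 1 < src.Length ? src[i + 1] : '\0';
        if (lineComment) { if (c == '\n') lineComment = false; }
        else if (blockComment) { if (c == '*' && next == '/') { blockComment = false; i++; } }
        else if (inString) { if (c == '\\') i++; else if (c == '"') inString = false; }
        else if (c == '/' && next == '/') { lineComment = true; i++; }
        else if (c == '/' && next == '*') { blockComment = true; i++; }
        else if (c == '"') inString = true;
        else if (c == '{') depth++;
        else if (c == '}') depth--;
    }
    return depth; // 0 for balanced input
}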
Then you need to parse the text immediately prior to some scope-opening braces to work out what they are. Your functions may be in global scope, class scope, or namespace scope, so you have to be able to parse namespaces and classes to identify the type of scope you are looking at. You can usually get away with fairly simple parsing, because most programmers use a similar style - for example, it's uncommon for someone to put blank lines between the 'class Fred' and its open brace, though they might write 'class Fred {'. There is also the chance that they will put extra junk on the line - e.g. 'template class __DECLSPEC MYWEIRDMACRO Fred {'. However, you can get away with a pretty simple "does the line contain the word 'class' with whitespace on both sides?" heuristic that will work in most cases.
OK, so you now know that you are inside a namespace and inside a class, and you find a new open scope. Is it a method?
The main identifying features of a method are:
A return type. This could be any sequence of characters and can be many tokens ("__DLLEXPORT const unsigned myInt32typedef * &"). Unless you compile the entire project you have no chance.
A function name: a single token (but watch out for "operator =" etc.).
A pair of parentheses containing zero or more parameters or a 'void'. This is your best clue.
A function declaration will not include certain reserved words that will precede many scopes (e.g. enum, class, struct, etc). And it may use some reserved words (template, const etc) that you must not trip over.
So you could search back for a blank line, or a line ending in ';', '{' or '}', which indicates the end of the previous statement/scope. Then grab all the text between that point and the open brace of your scope, extract a list of tokens, and try to match the parameter-list brackets. Check that none of the tokens are reserved words (enum, struct, class, etc).
This will give you a "reasonable degree of confidence" that you have a method. You don't need much parsing to get a pretty high degree of accuracy. You could spend a lot of time finding all the special cases that confuse your "parser", but if you are working on a reasonably consistent code-base (i.e. just your own company's code) then you'll probably be able to identify all the methods in the code fairly easily.
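For instance, the "grab the text before the brace" step might look like the sketch below (my illustration; real code would also need the comment/string handling from the scanner above):

// Walk back from an open brace to the previous ';', '{' or '}' and
// return the candidate method header in between.
static string HeaderBefore(string src, int bracePos)
{
    int start = bracePos;
    while (start > 0 && ";{}".IndexOf(src[start - 1]) < 0)
        start--;
    return src.Substring(start, bracePos - start).Trim();
}

You would then tokenise the returned text, reject it if it contains words like class, struct or enum, and check for a balanced parameter list.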
I'd probably use a regular expression, though given the number of data types, declaration options, and user-defined types/classes, it would be non-trivial. To avoid simply capturing assignments from function calls, you might start with a Regex (untested) like:
((private|public|internal|protected|virtual)\s+)?((static)\s+)?(int|bool|string|byte|char|double|long)\s+([A-Za-z][A-Za-z_0-9]*)\s*\(
This doesn't (by a long shot) catch everything, and you'd need to tune it up.
Another approach could involve reflection to determine function declarations, but that's probably not appropriate when you want to do static source code analysis.
If you want to write a real parser (I know you might not want to), then try ANTLR. If nothing else, it will be a fun project.
Is there a way to determine in C# that a token is a valid return type?
You can determine that it's either a return type or an error pretty easily (by making sure it's not anything else that could be in that position). And you probably don't need to guarantee "correct" behaviour on invalid code.
Then you look for the parentheses.

What's the best way to write a parser by hand? [closed]

We've used ANTLR to create a parser for a SQL-like grammar, and while the results are satisfactory in most cases, there are a few edge cases that we need to fix; and since we didn't write the parser ourselves we don't really understand it well enough to be able to make sensible changes.
So, we'd like to write our own parser. What's the best way to go about writing a parser by hand? What sort of parser should we use - recursive descent has been recommended; is that right? We'll be writing it in C#, so any tutorials for writing parsers in that language would be gratefully received.
UPDATE: I'd also be interested in answers that involve F# - I've been looking for a reason to use that in a project.
I would highly recommend the F# language as your language of choice for parsing on the .NET platform. Its roots in the ML family of languages mean it has excellent support for language-oriented programming.
Discriminated unions and pattern matching allow for a very succinct and powerful specification of your AST. Higher-order functions allow for the definition of parse operations and their composition. First-class support for monadic types allows state management to be handled implicitly, greatly simplifying the composition of parsers. Powerful type inference greatly aids the definition of these (complex) types. And all of this can be specified and executed interactively, allowing you to rapidly prototype.
Stephan Tolksdorf has put this into practice with his parser combinator library FParsec
From his examples we see how naturally an AST is specified:
type expr =
    | Val of string
    | Int of int
    | Float of float
    | Decr of expr

type stmt =
    | Assign of string * expr
    | While of expr * stmt
    | Seq of stmt list
    | IfThen of expr * stmt
    | IfThenElse of expr * stmt * stmt
    | Print of expr

type prog = Prog of stmt list
The implementation of the parser (partially elided) is just as succinct:
let stmt, stmtRef = createParserForwardedToRef()

let stmtList = sepBy1 stmt (ch ';')

let assign =
    pipe2 id (str ":=" >>. expr) (fun id e -> Assign(id, e))

let print = str "print" >>. expr |>> Print

let pwhile =
    pipe2 (str "while" >>. expr) (str "do" >>. stmt) (fun e s -> While(e, s))

let seq =
    str "begin" >>. stmtList .>> str "end" |>> Seq

let ifthen =
    pipe3 (str "if" >>. expr) (str "then" >>. stmt) (opt (str "else" >>. stmt))
          (fun e s1 optS2 ->
              match optS2 with
              | None -> IfThen(e, s1)
              | Some s2 -> IfThenElse(e, s1, s2))

do stmtRef := choice [ifthen; pwhile; seq; print; assign]

let prog =
    ws >>. stmtList .>> eof |>> Prog
On the second line, as an example, stmt and ch are parsers and sepBy1 is a monadic parser combinator that takes two simple parsers and returns a combined parser. In this case sepBy1 p sep returns a parser that parses one or more occurrences of p separated by sep. You can thus see how quickly a powerful parser can be combined from simple parsers. F#'s support for custom operators also allows for concise infix notation, e.g. the sequencing combinator and the choice combinator can be written as >>. and <|>.
Best of luck,
Danny
The only kind of parser that can be handwritten by a sane human being is a recursive-descent one. That said, writing a bottom-up parser by hand is still possible, but it is very undesirable.
If you're up for an RD parser, you have to verify that the SQL grammar is not left-recursive (and eliminate the recursion if necessary), and then basically write a function for each grammar rule. See this for further reference.
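To make the left-recursion point concrete, here is a toy sketch of my own (sums of unsigned integers only): the left-recursive rule Expr := Expr '+' Term would recurse forever in an RD parser, but rewritten as Expr := Term ('+' Term)* it maps onto a simple loop:

using System;

class TinyRd
{
    private readonly string s;
    private int i;
    public TinyRd(string input) { s = input; }

    public int Expr()            // Expr := Term ('+' Term)*
    {
        int v = Term();
        while (i < s.Length && s[i] == '+') { i++; v += Term(); }
        return v;
    }

    private int Term()           // Term := digit+  (no whitespace handling)
    {
        int start = i;
        while (i < s.Length && char.IsDigit(s[i])) i++;
        return int.Parse(s.Substring(start, i - start));
    }

    static void Main()
    {
        Console.WriteLine(new TinyRd("1+22+3").Expr()); // prints 26
    }
}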
Adding my voice to the chorus in favor of recursive-descent (LL1). They are simple, fast, and IMO, not at all hard to maintain.
However, take a good look at your language to make sure it is LL1. If you have any syntax like C has, like ((((type))foo)[ ]) where you might have to descend multiple layers of parentheses before you even find out if you are looking at a type, variable, or expression, then LL1 will be very difficult, and bottom-up wins.
Recursive descent will give you the simplest way to go, but I would have to agree with mouviciel that flex and bison are definitely worth learning. When you find out you have a mistake in your grammar, fixing the definition of the language in flex/bison will be a hell of a lot easier than rewriting your recursive descent code.
FYI, the C# parser is written as a recursive descent parser, and it tends to be quite robust.
Recursive descent parsers are indeed the best, and maybe the only, parsers that can be built by hand. You will still have to bone up on what exactly a formal, context-free language is and put your language in a normal form. I would personally suggest that you remove left-recursion and put your language in Greibach Normal Form. When you do that, the parser just about writes itself.
For example, this production:
A => aC
A => bD
A => eF
becomes something simple like:
int A() {
    chr = read();
    switch (chr) {
        case 'a': C(); break;
        case 'b': D(); break;
        case 'e': F(); break;
        default: throw_syntax_error('A', chr);
    }
}
And there aren't many cases much more difficult than this (what's more difficult is making sure your grammar is in exactly the right form, but this allows you the control that you mentioned).
Anton's link also seems excellent.
If you want to write it by hand, recursive descent is the most sensible way to go.
You could use a table parser, but that will be extremely hard to maintain.
Example:
Data = Object | Value;
Value = Ident, '=', Literal;
Object = '{', DataList, '}';
DataList = Data | DataList, Data;

ParseData {
    if PeekToken = '{' then
        return ParseObject;
    if PeekToken = Ident then
        return ParseValue;
    return Error;
}

ParseValue {
    ident = TokenValue;
    if NextToken <> '=' then
        return Error;
    if NextToken <> Literal then
        return Error;
    return (ident, TokenValue);
}

ParseObject {
    AssertToken('{');
    temp = ParseDataList;
    AssertToken('}');
    return temp;
}

ParseDataList {
    data = ParseData;
    temp = []
    while Ok(data) {
        temp = temp + data;
        data = ParseData;
    }
    return temp;    // was missing: the collected list is the result
}
I suggest that you don't write the lexer by hand - use flex or similar. The task of recognising tokens is not that hard to do by hand, but I don't think you'd gain much.
As others have said, recursive descent parsers are the easiest to write by hand. Otherwise you have to maintain a table of state transitions for each token, which isn't really human-readable.
I'm pretty sure ANTLR implements a recursive descent parser anyway: there's a mention of it in an interview about ANTLR 3.0.
I've also found a series of blog posts about writing a parser in C#. It seems quite gentle.
At the risk of offending the OP, writing a parser for a large language like some specific vendor's SQL by hand, when good parser generator tools (such as ANTLR) are available, is simply crazy. You'll spend far more time rewriting your parser by hand than you will fixing the "edge cases" using the parser generator, and you'll invariably have to go back and revise the parser anyway as the SQL standards move or as you discover you misunderstood something else. If you don't understand your parsing technology well enough, invest the time to understand it. It won't take you months to figure out how to deal with the edge cases with the parser generator, and you've already admitted you are willing to spend months doing it by hand.
Having said that, if you are hell-bent on redoing it manually, using a recursive descent parser is the best way to do it by hand.
(NOTE: OP clarified below that he wants a reference implementation. IMHO you should NEVER code a reference implementation of a grammar procedurally, so recursive descent is out.)
There is no "one best" way. Depending on your needs you may want bottom-up (LALR(1)) or recursive descent (LL(k)). Articles such as this one give personal reasons for preferring LALR(1) (bottom-up) to LL(k). However, each type of parser has its benefits and drawbacks. Generally LALR will be faster, as a finite-state machine is generated as a lookup table.
To pick what's right for you, examine your situation; familiarize yourself with the tools and technologies. Starting with some LALR and LL Wikipedia articles isn't a bad choice. In both cases you should ALWAYS start by specifying the grammar in BNF or EBNF. I prefer EBNF for its succinctness (for example, repetition needs an extra recursive rule in plain BNF but can be written inline in EBNF).
Once you've gotten your mind wrapped around what you want to do, and how to represent it as a grammar, (BNF or EBNF) try a couple of different tools and run them across representative samples of text to be parsed.
Anecdotally:
However, I've heard that LL(k) is more flexible. I've never bothered to find out for myself. From my few parser-building experiences I have noticed that regardless of whether it's LALR or LL(k), the best way to pick what's best for your needs is to start with writing the grammar. I've written my own C++ EBNF RD parser-builder template library, used Lex/YACC, and coded a small R-D parser. This was spread over the better part of 15 years, and I spent no more than 2 months on the longest of the three projects.
In C/Unix, the traditional way is to use lex and yacc. With GNU, the equivalent tools are flex and bison. I don't know the equivalents for Windows/C#.
If I were you, I would have another go at ANTLRv3 using the ANTLRWorks GUI, which gives you a very convenient way of testing your grammar. We use ANTLR in our project, and although the learning curve may be a bit steep in the beginning, once you learn it, it is quite convenient. Also, on their email newsletter there are a lot of people who are very helpful.
PS. IIRC, they also have a SQL grammar you could take a look at.
hth
Well, if you don't mind using another compiler-compiler tool like ANTLR, I suggest you take a look at Coco/R.
I've used it in the past and it was pretty good...
The reason you don't want a table-driven parser is that you will not be able to create sensible error messages. That is OK for a generated language, but not for one where humans are involved. The error messages produced by C-like language compilers provide ample evidence that people can adapt to anything, no matter how bad.
I would also vote for using an existing parser + lexer.
The only reasons I can see for doing it by hand:
if you want something relatively simple (like validating/parsing some input)
to learn/understand the principles
Look at gplex and gppg, lexer and parser generators for .NET. They work well, and are based on the same (and almost compatible) input as lex and yacc, which is relatively easy to use.
JFlex is a flex implementation for Java, and there is now a C# port of that project http://sourceforge.net/projects/csflex/. There also appears to be a C# port of CUP in progress, which can be found here: http://sourceforge.net/projects/datagraph/
I too would recommend avoiding hand-crafting your own solution. I tried this once myself for a very simple language (part of a university project) and it was incredibly time-consuming and difficult. It is also exceedingly hard to maintain and change once written.
Using an existing parser generator is the way to go, as the bulk of the hard work has been done and has been well tested over the years.
