Closed. This question is opinion-based and is not accepting answers (closed 6 years ago).
So, before I post my question, I will add a little premise. I have written quite a bit of code for academic purposes, but never before for production or an actual client.
What I would always do is this:
private void button1_Click(object sender, EventArgs e)
{
//Do all the programming here
}
However, now that I have to build actual software for a client (a small one), I find this process tedious and hard to manage as the code grows longer.
I still create separate classes and do some of the work here and there, but I don't think that's the correct direction.
What am I missing? How do professional developers do it?
Thanks.
EDIT: This is not exactly a coding question, but I still chose Stack Overflow because I really want the different perspectives of the superb professionals present here. I am just an industry newbie, so I really need to start learning in the right direction.
I find this process tedious and hard to manage as the code grows longer.
You are correct.
I still create separate classes and do some of the work here and there, but I don't think that's the correct direction.
It is the correct direction. Programming is about abstraction. Properties, methods, handlers, classes, and so on are all abstractions. Abstractions are useful because they present less complexity than their implementation details, and can therefore be understood and used effectively. You do not learn to drive by manipulating valves and cylinders and springs and camshafts; you learn abstractions like brakes and gear selectors.
When you learn to drive, you are handed a pile of abstractions which you must learn to use. When you are programming, you are handed a pile of existing abstractions -- variables, lists, types, and so on -- but you are also expected to build your own.
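To make that concrete, here is a minimal sketch (with made-up names like InvoiceCalculator and subtotalTextBox) of the kind of abstraction you might build yourself: the business logic moves into its own class, and the click handler shrinks to a thin piece of wiring.

public class InvoiceCalculator
{
    // All the "real" work lives here, where it can be tested without a UI.
    public decimal CalculateTotal(decimal subtotal, decimal taxRate)
    {
        return subtotal + subtotal * taxRate;
    }
}

private void button1_Click(object sender, EventArgs e)
{
    // The handler only translates UI events into calls on the abstraction.
    var calculator = new InvoiceCalculator();
    decimal subtotal = decimal.Parse(subtotalTextBox.Text);
    totalLabel.Text = calculator.CalculateTotal(subtotal, 0.20m).ToString("C");
}

The handler now just describes what happens on the click, and the calculation can grow, be reused, and be unit-tested without a form.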
How do professional developers do it?
This is not a site to teach you how to program. This is a site for specific questions about actual code. Professional developers do it by spending thousands of hours learning from others and practicing their craft; go get started! Come on back when you have a specific question about actual code.
I think you are doing it right! Managing large programs is indeed a hard job; you have to know how to modularize your code. You can create diagrams representing your project, which will make editing a lot easier.
Here is a link with some tools for Architecture and Modeling with Visual Studio
https://www.youtube.com/watch?v=ThEP7DgVAC0
Closed. This question is opinion-based and is not accepting answers (closed 4 years ago).
Of course, it depends on the application in question. If I am using, for example, a grammar checker to catch mistakes and make the code more readable, I don't think that is a bad practice (though tell me if it is).
But I am thinking about bigger extensions like ReSharper, which adds so much that I don't even know 95% of what it does.
My big question is: is it bad practice to use ReSharper or similar applications that I mostly don't understand (while the few bits I do understand do help me), when I don't even know how most of the basic Visual Studio functionality works?
A productivity tool (like R# or others) is supposed to enhance your productivity.
That means you should be able to do your job, just do it faster (or cheaper or whatever other metric you use) with the tool.
If you catch yourself not being able to do the job without the tool, because you don't understand what the tool does or cannot replicate it without the tool, that is a problem.
Just keep in mind that a tool can vanish for any reason at any time. Your employer may not want to pay for it, may not like it, or may use a different product; or maybe the product no longer supports your preferred environment, or simply has bugs. You cannot tell an employer that you cannot do something because a $100 tool broke when you are paid $100K. It's acceptable that you take longer, but not that you have to give up.
Closed. This question needs to be more focused and is not accepting answers (closed 4 years ago).
First, please no spamming; I am not necessarily an OOP devotee. That said, I have been a programmer on and off for almost 30 years and have created a lot of pretty cool production code systems/solutions in several industries. I've also done my share of break/fix, database development, etc., and even about 10 years as a web programmer (not developer), so I am not so much a newbie as someone trying to get an answer about something that, frankly, is eluding me.
I started as a "C" programmer in the early 1980s, and "C" served me well into the early 2000s (even today most scripting and higher-level languages use "C" syntactical elements).
That said, overloading seems to violate every principle of what I was taught were "good coding practices": it increases ambiguity, creating opportunities to omit code that was intended to run for a given condition, or to actually run a routine you didn't expect because some condition fell through the cracks. It also generally seems to create LOTS of confusion for learners.
I am not saying overloading is bad per se; I just want to better understand its practical application to real problems, other than simply providing input validation, or handling inputs from sources you have no control over in an API, or something else whose type you don't necessarily know (again, I'm not clear on how or why that could actually happen either). C# has a lot of Parse and try/catch to handle this, as do most OOP languages.
In over a decade, I have yet to get a straight, non-judgmental, and dare I say unsnarky answer to this question. Surely there is someone who can offer a reasonable explanation of why it is used.
So I pose the question to you, the Stack Overflow gurus: is having a method/function that is potentially callable in multiple different ways, with multiple exclusive code segments, really a good thing, or does it just suggest a lack of good planning when designing software? Again, I'm not knocking, judging, or disparaging; I just don't get it... please enlighten me!
I'd say std::to_string is a pretty good example of good use of overloading. Why would you want to have different functions for converting different types to std::string? You don't. You just want one - std::to_string and you want it to behave sensibly whatever type of argument you give it - and it does just that. Using overloading keeps the client code simple.
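The same point can be made in C# (a rough sketch with an invented Describe helper, not a standard API): one overloaded name covers several input types, and the compiler picks the right variant, so callers never have to juggle differently named functions.

public static class Text
{
    // One name, several overloads; the compiler selects the right one
    // based on the static type of the argument.
    public static string Describe(int value)    { return "int: " + value; }
    public static string Describe(double value) { return "double: " + value; }
    public static string Describe(bool value)   { return value ? "yes" : "no"; }
}

// Client code stays simple, whatever the argument type:
// Text.Describe(42)   -> "int: 42"
// Text.Describe(true) -> "yes"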
Closed. This question is opinion-based and is not accepting answers (closed 9 years ago).
I'm about to develop some LOB applications using VS2012, WPF, SQL Server Express 2012, Unity, and Prism.
I don't have legacy applications to care about.
Is it OK if I choose the Model First workflow for my upcoming projects, or are there important benefits to the Code First workflow that I would be missing?
If there are any I can't overlook, could I start with Model First and then switch to Code First? It happens that I'm more comfortable designing databases with the designer than in code, which is the main reason for this question.
If you're more comfortable working with databases first, I would go down that route. This question has a lot of pros/cons for each.
I've recently used Code First for a project and I regret that decision. Although it is incredibly powerful, it was an unnecessary learning curve and ultimately took far too long to set up a simple schema.
If you want to learn how Code First works, and time isn't an issue, then you may as well go for it. Otherwise, what do you really have to gain from it?
Ultimately though, if you're developing it and you already have a sufficient skill set in one of these, use it.
I have created WPF applications using Code First and the MVVM pattern + DI (though not Prism).
It took a while to convince me to move away from the edmx models, but I've found Code First to be a much cleaner approach, with no apparent downsides.
I think you could easily move to Model First from Code First, though you probably won't need to. I haven't tried it; you might need AutoMapper.
I have successfully taken existing databases and moved over to Code First, though it is a bit messier.
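For anyone weighing the two, this is roughly what Code First looks like in practice (a minimal sketch assuming Entity Framework and an invented Customer entity): the schema is expressed as plain classes and a DbContext, and the database is generated from them rather than drawn in the edmx designer.

using System.Data.Entity;   // Entity Framework

// A plain POCO entity; EF infers the table and columns from it.
public class Customer
{
    public int Id { get; set; }        // becomes the primary key by convention
    public string Name { get; set; }
}

// The context lists the entities that make up the model (and thus the schema).
public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}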
Closed. This question is opinion-based and is not accepting answers (closed 9 years ago).
I am looking for honest / constructive feedback.
I hear a lot of my peers who have been using .NET for a while now say how easily they built their GUI interfaces. On closer inspection, they have used 3rd-party tools such as Infragistics.
As a new .NET programmer (certified, I may add), I wanted to know if anyone has actually created interfaces using nothing but whatever happens to be available by default with the framework...
I am guessing it shouldn't be too difficult to create a good, aesthetic-looking GUI without using 3rd-party add-ons.
Yes, we've done it (Windows).
It depends on where you put the emphasis in your guess. No, it's not TOO difficult, but it's definitely not easy, unless your requirements are truly trivial, as opposed to apparently trivial.
It all depends on what you need / want to do. My advice: don't tell your boss this will be easy, not unless you want help getting out of the door for the last time.
For instance, take a straight textbox.
They want to enter currency in it.
Multiple rounding algorithms.
Enter raw value, display formatted: currency symbol, thousand separators.
Optional pounds or pence.
Optional blank or zero.
Optional treatment of negatives.
Optional display formatting of negatives.
Alignment on decimal point.
Auto change of font on resizing.
And break none of the standard behaviours.
Trust me, it's not simple at all, especially if you do something Infragistics did not, and go for a good developer interface as well as the end-user behaviours.
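To give a flavour of the work involved, here is a rough WinForms sketch (a hypothetical CurrencyTextBox, covering only two behaviours from the list above: raw value while editing, formatted display otherwise); every additional requirement, such as rounding rules, negatives, and decimal alignment, adds more code on top of this.

public class CurrencyTextBox : System.Windows.Forms.TextBox
{
    private decimal value;

    protected override void OnEnter(System.EventArgs e)
    {
        base.OnEnter(e);
        Text = value.ToString();            // show the raw value for editing
    }

    protected override void OnLeave(System.EventArgs e)
    {
        base.OnLeave(e);
        decimal.TryParse(Text, out value);  // ignores bad input, for brevity
        Text = value.ToString("C");         // formatted: symbol, separators
    }
}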
Not trying to put you off. It's challenging and rewarding, but when you have the entire application stuck behind some irritating bug in the UI, bosses lose patience real quick, and you haven't got the get-out-of-jail-free card of shrugging and saying "that's how X works".
NB: just buying a suite won't fix all these problems; you can spend a lot of time producing a totally crap UI with them as well, you just don't have to write the code...
The answer to that is a lot of hard work. :(
Can your current suite be upgraded?
If you have the source, could it be fixed? If you have the source and it's been twiddled with, are those "improvements" interfering?
This needs some hard-headed, realistic analysis. Which components are broken? How much are they used? How much of the extra behaviour in the suite do you really need?
Most important: how good is the separation of concerns in the current code, and how comprehensive are both your unit tests and automation tests?
Would compatibility mode sort it out?
You need to get to a point where the number of questions doesn't significantly outweigh the number of answers.
I've been where you are though it was another suite in another environment. The people looking for the cheap, quick and painless way of dealing with a mess like this were hugely disappointed, but it can be attacked in parts as long as everybody takes a heavy dose of pragmatism.
As an example:
Someone had bought a Windows component that looked like an HTML link and was heavily dependent on file associations and API calls. It was very visible and all over the place. I knocked up a much better and far less fragile one in a few days and swapped it in; a lot of perceived problems disappeared, confidence increased, and the remaining problems started to look less horrible.
Think of it like going into triage mode on bugs at the end of a struggling release.
Closed. This question is off-topic and is not accepting answers (closed 9 years ago).
I've been working for a couple of years on a small project, almost by myself, with the occasional help of some colleagues. The project is getting out of my hands: the size of the code is growing (around 20K lines now), and the initial expectations I had for it have outgrown my own ability and time. So now I want to open-source it, in the hope of attracting some contributors. My motivations for going open source are these:
The project is rather academic (a library of algorithms for scientific computing), and I don't really have any economic interest in it.
The project is getting too big for me to handle it by myself, and the number of features I've planned are enough to keep a small team motivated (I think).
It needs a lot of testing, not just unit testing, but testing in the real world to see if the API is easy to use, the performance is as expected, etc.
I'm sure it has a lot of bugs, but I can only find a few, since it's just me testing it.
It needs proper documentation, because the API is getting a bit complex.
Other than that, I think the project could benefit from a community in terms of deciding which features are most needed and creating a set of guidelines for future development.
I'm using Git, so my first thought was to publish it on GitHub and/or CodePlex. Besides that, what would be the steps to slowly grow a community of users, and perhaps developers, around it? Do I need a domain of my own, or should I stick to GitHub/CodePlex? How do I set up a platform for collaboration between developers who are potentially geographically separated? Should I set up a mailing list? And most important, how do I attract people to use it and collaborate on it?
The project is a .NET library for optimization and machine learning, written in C#.
There is only one piece of advice I can give here: use GitHub. It is common, (pretty much) everyone knows about it, it is easy to use, and the community you are trying to attract is already on it. It has a ton of tools which you may not have even thought about but which may come in handy. It is pretty much the perfect solution for what you're looking to do, so don't overthink it.
As for attracting people to use it and contribute: if it is something useful and good, people will find it. I have found a ton of obscure projects with a simple Google search. If someone googles for something related to your project (and it is appropriately named and such), they will likely find it. There isn't really much you can do to force demand, though; just let it happen. As for contributors, people who are using it will likely contribute their additions back. Just be sure to stay actively involved in managing it (monitoring pull requests, etc.). If no one is accepting requests or managing versions, contributors will likely start to give up on your project.