I have similar rules for some properties in multiple model objects and I want to replace them with custom property validators to avoid code duplication in unit tests.
I have my property validator:
public class IntIdPropertyValidator: PropertyValidator
{
public IntIdPropertyValidator()
: base("Property {PropertyName} should be greater than 0")
{
}
protected override bool IsValid(PropertyValidatorContext context)
{
var value = (int)context.PropertyValue;
return value > 0;
}
}
And I wire it up in the model validator class:
public class SomeRequestValidator : AbstractValidator<CreateWordRequest>
{
public SomeRequestValidator()
{
RuleFor(x => x.Id).SetValidator(new IntIdPropertyValidator());
}
}
I tried to test it:
[Test]
public void Validate_IdHasValidator_Success()
{
Init();
validator.ShouldHaveChildValidator(x => x.Id, typeof(IntIdPropertyValidator));
}
But the test always fails.
So, how can I test that the validator is actually set for the Id property?
You are using ShouldHaveChildValidator in the wrong way: Id is a simple type, and ShouldHaveChildValidator is meant for complex types (see also the source code).
The right way to test the property is to pass valid and invalid objects and then verify the outcome using ShouldNotHaveValidationErrorFor and ShouldHaveValidationErrorFor:
[Test]
public void Should_have_error_when_Id_Is_Illegal() {
validator.ShouldHaveValidationErrorFor(p => p.Id, new CreateWordRequest());
}
[Test]
public void Should_not_have_error_when_Id_Is_Legal() {
validator.ShouldNotHaveValidationErrorFor(p => p.Id, new CreateWordRequest()
{
Id = 7
});
}
Edit
The following code will do the verification you were looking for:
[Test]
public void Validate_IdHasValidator_Success()
{
var validator = new SomeRequestValidator();
var descriptor = validator.CreateDescriptor();
var matchingValidators = descriptor.GetValidatorsForMember(
Extensions.GetMember<CreateWordRequest, int>(x => x.Id).Name);
Assert.That(matchingValidators.FirstOrDefault(), Is.InstanceOf<IntIdPropertyValidator>());
}
I'd like to explain why you shouldn't use the code above.
When you unit test a class, you verify that its behavior won't break.
When you create a custom validator, you create a class whose responsibility is to verify a specific model (i.e. its business rules).
Id is a simple type whose business rules depend on its parent model, so you need to verify the business rules for Id through the model validator.
Suppose one of your models suddenly needs to change: you would have no test verifying that the existing business rules still hold (and if you instead change IntIdPropertyValidator itself, that change ripples into every model that uses it, whether you intended it to or not).
Creating a custom property validator is very good for code maintenance; the tests, however, should be written against the model validator.
On complex types the story is quite different:
Complex types usually have their own business rules. In that case you create a custom validator for the complex type and then verify that the parent validator uses the right child validator. Other things worth verifying are what happens when the complex property is null, and conditional rules such as "when the property value is X, the complex type's state must be Y".
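For illustration, here is roughly what that looks like with a hypothetical Address property and its own AddressValidator (none of these types are in the original question; this is only a sketch of the pattern):

public class Address
{
    public string City { get; set; }
}

public class OrderRequest
{
    public Address Address { get; set; }
}

public class AddressValidator : AbstractValidator<Address>
{
    public AddressValidator()
    {
        RuleFor(a => a.City).NotEmpty();
    }
}

public class OrderRequestValidator : AbstractValidator<OrderRequest>
{
    public OrderRequestValidator()
    {
        // Complex property: delegate to the child validator
        RuleFor(x => x.Address).SetValidator(new AddressValidator());
    }
}

[Test]
public void Should_use_AddressValidator_for_Address()
{
    var validator = new OrderRequestValidator();
    validator.ShouldHaveChildValidator(x => x.Address, typeof(AddressValidator));
}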
I've got the following DTO:
public class SomethingRequest {
public string Code { get; set; }
}
Code must be unique, so I've created a validator that checks whether there is already a record with the provided code, like the following:
public class SomethingValidator: AbstractValidator<SomethingRequest>
{
public SomethingValidator(ISomethingRepository repo) {
RuleFor(something => something.Code).Must(BeUnique);
}
private bool BeUnique(string code) { ... uniqueness check... }
}
As I'm using the validation feature, the validator is automatically wired up for all service methods that take a SomethingRequest, which is really great.
When the condition fails I would like to return a 409 Conflict HTTP status code, but a 400 Bad Request is always returned.
So, the questions are:
Am I misusing the validation feature? (i.e. were auto-wired validators not designed for application-logic checks?)
If I'm not, is there any way to override the 400 Bad Request status code from the validator?
Am I misusing the validation feature? (i.e. were auto-wired validators not designed for application-logic checks?)
I would say this is best done in the business logic, away from the validation, because checking for uniqueness is really a verification check rather than validation: it requires going to a data source. My answer on this question addresses a similar concern.
While you can override the response status code of the validation error using the ErrorResponseFilter, I would recommend creating your own request filter for this business logic, as overriding the response there will be messy as your application grows, and again, it's not really validation.
Using a filter attribute is straightforward in ServiceStack:
public class VerifySomethingCodeAttribute : Attribute, IHasRequestFilter
{
IHasRequestFilter IHasRequestFilter.Copy()
{
return this;
}
public int Priority { get { return int.MinValue; } }
public void RequestFilter(IRequest req, IResponse res, object requestDto)
{
SomethingRequest somethingRequestDto = requestDto as SomethingRequest;
if(somethingRequestDto == null)
return;
// Verify the code
// Replace with suitable logic
// If you need the database, resolve it from the IoC,
// i.e. HostContext.TryResolve<IDbConnectionFactory>();
bool isUnique = ...
if(!isUnique)
throw HttpError.Conflict("This record already exists");
}
}
Then simply annotate the DTO:
[VerifySomethingCode]
public class SomethingRequest {
public string Code { get; set; }
}
Then you can be sure that the Code in the DTO will have been verified as unique and you can return any status and response you want. The filter gives you total control.
Hope this helps.
1) Although it allows dependency injection and wiring up of repositories, the FluentValidation code isn't where you are supposed to put this kind of check, as it is more along the lines of verification code. This answer has a good explanation of the difference between the two. I'll just add that splitting verification from validation also makes sense if only because it lets you return the appropriate status code more easily.
2) If you would like to override the 400 Bad Request status code, you can use the validation feature's ErrorResponseFilter like so:
Plugins.Add(new ValidationFeature
{
ErrorResponseFilter = CustomValidationError
});
...
private object CustomValidationError(ValidationResult validationResult, object errorDto)
{
var firstError = validationResult.Errors.First();
return new HttpError(HttpStatusCode.Conflict, firstError.ErrorCode, firstError.ErrorMessage);
}
This filter appears to be intended as a global solution, since it doesn't give you an easy way to determine which DTO/service the error came from. I would suggest making the change described in option 1 instead.
The basic issue is how to test a presenter.
Take:
A domain object (which will eventually be persisted to the DB).
Its base attributes are Id (the DB ID: int/GUID/whatever) and TransientId (a local ID until it is saved, a GUID).
DomainObject:
namespace domain {
    public class DomainObject {
        private int _id;
        private Guid _transientId;

        public DomainObject()
        {
            _transientId = Guid.NewGuid();
        }
    }
}
PresenterTest:
var mocks = new MockRepository();
var repository = mocks.StrictMock<IRepository>();
var view = mocks.StrictMock<IView>();
view.Save += null;
var saveEvent = LastCall.IgnoreArguments().GetEventRaiser();
var domainObject = new DomainObject { Id = 0, Attribute = "Blah" };
Expect.Call(repository.Save(domainObject)).Return(true);
mocks.ReplayAll();
var sut = new Presenter(repository, view);
saveEvent.Raise(view, EventArgs.Empty);
mocks.VerifyAll();
So the problem here is that the domain object's identity is calculated from Id and, failing that, from the transientId; there's no way to know what the transientId will be, so I can't have the mock repository check for equality.
The workarounds so far are:
1) Use LastCall.Ignore and content myself with just testing that the method got called, without testing the content of the call.
2) Write a DTO to test against and save to a service; the service then handles the mapping to the domain.
3) Write a fake test repository that uses custom logic to determine success.
Option 1 doesn't test the majority of the logic, option 2 is a lot of extra code for no good purpose, and option 3 seems potentially brittle.
Right now I'm leaning towards DTOs and a service, on the theory that it gives the greatest isolation between tiers, but it's probably 75% unnecessary...
there's no way to know what the transientId will be, so I can't have the mock repository check for equality.
Actually, I think there is an opportunity here.
Instead of calling Guid.NewGuid(), you could create your own GuidFactory class that generates GUIDs. By default, it would use Guid.NewGuid() internally, but you could take control of it for tests.
public static class GuidFactory
{
static Func<Guid> _strategy = () => Guid.NewGuid();
public static Guid Build()
{
return _strategy();
}
public static void SetStrategy(Func<Guid> strategy)
{
_strategy = strategy;
}
}
In your constructor, you replace Guid.NewGuid() with GuidFactory.Build().
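Applied to the DomainObject from the question, the constructor would become roughly this (a sketch):

public DomainObject()
{
    // GuidFactory.Build() defaults to Guid.NewGuid(), but tests can override the strategy
    _transientId = GuidFactory.Build();
}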
In your test setup, you override the strategy to suit your needs -- return a known Guid that you can use elsewhere in your tests or just output the default result to a field.
For example:
public class PseudoTest
{
IList<Guid> GeneratedGuids = new List<Guid>();
public void SetUpTest()
{
GuidFactory.SetStrategy(() =>
{
var result = Guid.NewGuid();
GeneratedGuids.Add(result);
return result;
});
}
public void Test()
{
systemUnderTest.DoSomething();
Assert.AreEqual(GeneratedGuids.Last(), someOtherGuid);
}
}
WPF has helped me realize that you really don't need to do much testing, if any, on the Controller/Presenter/VM. You should focus all of your tests on the models and services you use. All business logic should live there; the view model, presenter or controller should be as light as possible, and its only role is to shuttle data between the model and the view.
What's the point of testing whether you call a service when a button command makes it to the presenter? Or testing whether an event is wired properly?
Don't get me wrong, I still have a very small test fixture for the view models or controllers, but the focus of the tests should really be on the models; let the integration tests cover the view and the presenter.
Skinny controllers/VMs/Presenters.
Fat Models.
This is my answer because I ran into the same issue trying to test view models: I wasted a lot of time trying to figure out how best to test them, and then another developer gave a great talk on Model-View patterns making exactly this argument. Don't spend too much time writing tests for these; focus on the models/services.
I have a class which has more information than my interface: it has a property which I did not expose in the interface.
public interface IViewResolver
{
object GetViewFor(string viewName);
}
I now want to implement a MEF-based ViewResolver based on that interface.
public class ViewResolver : IViewResolver
{
[ImportMany]
public IEnumerable<Lazy<IView,IViewMetaData>> Views { get; set; }
public object GetViewFor(string viewName)
{
var view = Views.Where(x => x.Metadata.Name == viewName).FirstOrDefault();
return view == null ? null : view.Value;
}
}
My SUT gets an IViewResolver via constructor injection, loaded with my MEF ViewResolver. In my unit test I would like to pre-set the Views property from the outside, without using MEF or being MEF-specific in my interface.
Basically I want to set Views to an expected value and see whether my view model, which uses the IViewResolver, returns the preset view...
How can I stub the Views property even though it does not exist on my interface?
If I am on the wrong path, any corrections would be much appreciated.
Thanks, D.
If you want to test your ViewModel (and not the resolver), which is only aware of the IViewResolver interface, you shouldn't have any problem: the only method (according to the code provided) that the ViewModel can access is GetViewFor. All you need to do is return the appropriate view for each test case, given the view name. In Rhino Mocks it would look something like this:
// Arrange the test objects
var viewResolverMock = MockRepository.GenerateMock<IViewResolver>();
viewResolverMock.Stub(x => x.GetViewFor(thisTestViewName)).Return(thisTestView);
var myViewModel = new MyViewModel(viewResolverMock);
// Do the actual operation on your tested object (the view model)
var actualResult = myViewModel.DoSomethingWithTheView();
// Assert
Assert.AreEqual(expectedResult, actualResult);
I'm new to DDD and I'm trying to apply it in real life. There are no questions about validation logic such as null checks, empty-string checks, etc. - that goes directly into the entity constructor/properties. But where do I put validation of global rules like 'unique user name'?
So, we have entity User
public class User : IAggregateRoot
{
private string _name;
public string Name
{
get { return _name; }
set { _name = value; }
}
// other data and behavior
}
And repository for users
public interface IUserRepository : IRepository<User>
{
User FindByName(string name);
}
Options are:
Inject repository to entity
Inject repository to factory
Create operation on domain service
???
And each option more detailed:
1. Inject repository to entity
I can query the repository in the entity's constructor/property setters, but I think that keeping a reference to the repository in the entity is a bad smell.
public User(IUserRepository repository)
{
_repository = repository;
}
public string Name
{
get { return _name; }
set
{
if (_repository.FindByName(value) != null)
throw new UserAlreadyExistsException();
_name = value;
}
}
Update: we can use DI to hide the dependency between User and IUserRepository behind a Specification object.
2. Inject repository to factory
I can put this verification logic in a UserFactory. But what if we want to change the name of an already existing user?
3. Create operation on domain service
I can create a domain service for creating and editing users. But someone could edit the user's name directly without calling that service...
public class AdministrationService
{
private IUserRepository _userRepository;
public AdministrationService(IUserRepository userRepository)
{
_userRepository = userRepository;
}
public void RenameUser(string oldName, string newName)
{
if (_userRepository.FindByName(newName) != null)
throw new UserAlreadyExistsException();
User user = _userRepository.FindByName(oldName);
user.Name = newName;
_userRepository.Save(user);
}
}
4. ???
Where do you put global validation logic for entities?
Thanks!
Most of the time it is best to place these kinds of rules in Specification objects.
You can place these Specifications in your domain packages, so anybody using your domain package has access to them. Using a specification, you can bundle your business rules with your entities, without creating difficult-to-read entities with undesired dependencies on services and repositories. If needed, you can inject dependencies on services or repositories into a specification.
Depending on the context, you can build different validators using the specification objects.
The main concern of entities should be keeping track of business state - that's enough of a responsibility, and they shouldn't be concerned with validation.
Example
public class User
{
public string Id { get; set; }
public string Name { get; set; }
}
Two specifications:
public class IdNotEmptySpecification : ISpecification<User>
{
public bool IsSatisfiedBy(User subject)
{
return !string.IsNullOrEmpty(subject.Id);
}
}
public class NameNotTakenSpecification : ISpecification<User>
{
// omitted code to set service; better use DI
private Service.IUserNameService UserNameService { get; set; }
public bool IsSatisfiedBy(User subject)
{
return UserNameService.NameIsAvailable(subject.Name);
}
}
And a validator:
public class UserPersistenceValidator : IValidator<User>
{
private readonly IList<ISpecification<User>> Rules =
new List<ISpecification<User>>
{
new IdNotEmptySpecification(),
new NameNotEmptySpecification(),
new NameNotTakenSpecification()
// and more ... better use DI to fill this list
};
public bool IsValid(User entity)
{
return BrokenRules(entity).Count() == 0;
}
public IEnumerable<string> BrokenRules(User entity)
{
return Rules.Where(rule => !rule.IsSatisfiedBy(entity))
.Select(rule => GetMessageForBrokenRule(rule));
}
// ...
}
For completeness, the interfaces:
public interface IValidator<T>
{
bool IsValid(T entity);
IEnumerable<string> BrokenRules(T entity);
}
public interface ISpecification<T>
{
bool IsSatisfiedBy(T subject);
}
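Usage could then look something like this (a sketch; it assumes GetMessageForBrokenRule and the specifications' service dependencies are wired up, which the code above elides):

var validator = new UserPersistenceValidator();
var user = new User { Id = "42", Name = "alice" };

if (!validator.IsValid(user))
{
    // Report every rule the entity breaks
    foreach (var message in validator.BrokenRules(user))
    {
        Console.WriteLine(message);
    }
}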
Notes
I think Vijay Patel's earlier answer is in the right direction, but I feel it's a bit off. He suggests that the user entity depend on the specification, whereas I believe it should be the other way around. This way, you can let the specification depend on services, repositories and context in general, without making your entity depend on them through a specification dependency.
References
A related question with a good answer and example: Validation in a Domain Driven Design.
Eric Evans describes the use of the specification pattern for validation, selection and object construction in chapter 9, p. 145.
This article on the specification pattern, with an application in .NET, might be of interest to you.
I would not recommend disallowing property changes on an entity when the values come from user input.
For example, if validation did not pass, you can still use the instance to display it in the user interface together with the validation results, allowing the user to correct the error.
Jimmy Nilsson, in his "Applying Domain-Driven Design and Patterns", recommends validating for a particular operation, not just for persisting. While an entity could be successfully persisted, the real validation occurs when the entity is about to change its state, for example when the 'Ordered' state changes to 'Purchased'.
While being created, the instance must be valid-for-saving, which involves checking for uniqueness. That is different from valid-for-ordering, where not only uniqueness must be checked, but also, for example, the creditworthiness of the client and availability at the store.
So validation logic should not be invoked on property assignments; it should be invoked by aggregate-level operations, whether they are persistence-related or not.
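A rough sketch of per-operation validation in that spirit (the Order type, its states and the services here are made up for illustration):

public enum OrderState { Created, Ordered, Purchased }

public interface ICreditService { bool IsCreditworthy(string customerName); }
public interface IStockService { bool IsAvailable(Order order); }

public class Order
{
    public string CustomerName { get; set; }
    public OrderState State { get; private set; }

    // Valid-for-saving: cheap structural checks, evaluated when persisting
    public bool IsValidForSaving()
    {
        return !string.IsNullOrEmpty(CustomerName);
    }

    // Valid-for-ordering: checked only when the aggregate actually changes state
    public void Purchase(ICreditService creditService, IStockService stockService)
    {
        if (!creditService.IsCreditworthy(CustomerName))
            throw new InvalidOperationException("Customer failed the credit check.");
        if (!stockService.IsAvailable(this))
            throw new InvalidOperationException("Items are not available at the store.");

        State = OrderState.Purchased;
    }
}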
Edit: Judging from the other answers, the correct name for such a 'domain service' is specification. I've updated my answer to reflect this, including a more detailed code sample.
I'd go with option 3: create a specification that encapsulates the actual validation logic. For example, the specification could initially call a repository, but you could replace that with a web service call at a later stage. Having all of that logic behind an abstract specification keeps the overall design flexible.
To prevent someone from editing the name without validating it, make the specification a required aspect of editing the name. You can achieve this by changing the API of your entity to something like this:
public class User
{
public string Name { get; private set; }
public void SetName(string name, ISpecification<User, string> specification)
{
// Insert basic null validation here.
if (!specification.IsSatisfiedBy(this, name))
{
// Throw some validation exception.
}
this.Name = name;
}
}
public interface ISpecification<TType, TValue>
{
bool IsSatisfiedBy(TType obj, TValue value);
}
public class UniqueUserNameSpecification : ISpecification<User, string>
{
private IUserRepository repository;
public UniqueUserNameSpecification(IUserRepository repository)
{
this.repository = repository;
}
public bool IsSatisfiedBy(User obj, string value)
{
    if (value == obj.Name)
    {
        return true;
    }
    // Use this.repository for further validation of the name,
    // e.g. treat it as satisfied only when no other user has that name yet.
    return this.repository.FindByName(value) == null;
}
}
Your calling code would look something like this:
var userRepository = IoC.Resolve<IUserRepository>();
var specification = new UniqueUserNameSpecification(userRepository);
user.SetName("John", specification);
And of course, you can mock ISpecification in your unit tests for easier testing.
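For example, a hand-rolled stub is enough (a mocking framework would work just as well); this reuses the User and ISpecification types shown above:

public class AlwaysSatisfiedSpecification : ISpecification<User, string>
{
    public bool IsSatisfiedBy(User obj, string value)
    {
        // Pretend every name passes, so the test can focus on User.SetName itself
        return true;
    }
}

[Test]
public void SetName_sets_name_when_specification_is_satisfied()
{
    var user = new User();
    user.SetName("John", new AlwaysSatisfiedSpecification());
    Assert.AreEqual("John", user.Name);
}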
I’m not an expert on DDD, but I have asked myself the same questions and this is what I came up with:
Validation logic should normally go into the constructor/factory and setters. This way you guarantee that you always have valid domain objects. But if the validation involves database queries that hurt your performance, an efficient implementation requires a different design.
(1) Injecting the repository into entities: this can be technically difficult, and it also makes managing application performance very hard due to the fragmentation of your database logic. Seemingly simple operations can suddenly have an unexpected performance impact. It also makes it impossible to optimize operations on groups of the same kind of entities: you can no longer write a single group query, and instead you always end up with individual queries per entity.
(2) Injecting the repository: you should not put any business logic in repositories. Keep repositories simple and focused; they should act as if they were collections and only contain logic for adding, removing and finding objects (some even spin the find methods off into separate objects).
(3) Domain service: this seems the most logical place to handle validation that requires database querying. A good implementation would make the constructor/factory and the setters involved package-private, so that entities can only be created/modified through the domain service.
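A sketch of that idea in C# terms, where "package private" roughly maps to internal (IUserRepository, Save and UserAlreadyExistsException are taken from the question; the rest is illustrative):

public class User
{
    // Only code in the same assembly (e.g. the domain service) can change the name
    public string Name { get; internal set; }
}

public class UserService
{
    private readonly IUserRepository _repository;

    public UserService(IUserRepository repository)
    {
        _repository = repository;
    }

    public void RenameUser(User user, string newName)
    {
        if (_repository.FindByName(newName) != null)
            throw new UserAlreadyExistsException();

        user.Name = newName;   // allowed here, hidden from other assemblies
        _repository.Save(user);
    }
}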
I would use a Specification to encapsulate the rule. You can then invoke it when the Name property is updated (or from anywhere else that needs it):
public class UniqueUserNameSpecification : ISpecification
{
public bool IsSatisfiedBy(User user)
{
// Check if the username is unique here
}
}
public class User
{
string _Name;
UniqueUserNameSpecification _UniqueUserNameSpecification; // You decide how this is injected
public string Name
{
get { return _Name; }
set
{
if (_UniqueUserNameSpecification.IsSatisfiedBy(this))
{
_Name = value;
}
else
{
// Execute your custom warning here
}
}
}
}
It won't matter if another developer tries to modify User.Name directly, because the rule will always execute.
Find out more here
In my CQRS Framework, every Command Handler class also contains a ValidateCommand method, which then calls the appropriate business/validation logic in the Domain (mostly implemented as Entity methods or Entity static methods).
So the caller would do like so:
if (cmdService.ValidateCommand(myCommand) == ValidationResult.OK)
{
// Now we can assume there will be no business reason to reject
// the command
cmdService.ExecuteCommand(myCommand); // Async
}
Every specialized Command Handler contains the wrapper logic, for instance:
public ValidationResult ValidateCommand(MakeCustomerGold command)
{
    var result = new ValidationResult();
    if (Customer.CanMakeGold(command.CustomerId))
    {
        // "OK" logic here
    }
    else
    {
        // "Not OK" logic here
    }
    return result;
}
The ExecuteCommand method of the command handler then calls ValidateCommand() again, so even if the client didn't bother, nothing that isn't supposed to happen will happen in the Domain.
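A sketch of that defensive re-check inside the handler (ignoring the async aspect; the method and type names follow the answer, the body is illustrative):

public void ExecuteCommand(MakeCustomerGold command)
{
    // The domain never trusts the caller to have validated first
    if (ValidateCommand(command) != ValidationResult.OK)
        throw new InvalidOperationException("Command rejected by validation.");

    // ... apply the command to the domain, e.g. load the customer and make it gold
}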
In short, you have four options:
IsValid method: transition an entity to a state (potentially invalid) and ask it to validate itself.
Validation in application services.
TryExecute pattern.
Execute / CanExecute pattern.
Read more here.
Create a method called, for example, IsUserNameValid() and make it accessible from everywhere. I would put it in the user service myself. Doing this will not limit you when future changes arise, and it keeps the validation code in one place (the implementation), so other code that depends on it will not have to change if the validation changes. You may find that you need to call it from multiple places later on, such as the UI for visual indication (without having to resort to exception handling), the service layer for correct operations, and the repository (cache, DB, etc.) layer to ensure that stored items are valid.
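A minimal sketch of that idea (the class shape is an assumption, reusing IUserRepository from the question):

public class UserValidationService
{
    private readonly IUserRepository _userRepository;

    public UserValidationService(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    // The single place that decides what a valid user name is;
    // UI, service layer and repository layer can all call it.
    public bool IsUserNameValid(string userName)
    {
        return !string.IsNullOrWhiteSpace(userName)
            && _userRepository.FindByName(userName) == null;
    }
}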
I like option 3. The simplest implementation could look like this:
public interface IUser
{
string Name { get; }
bool IsNew { get; }
}
public class User : IUser
{
public string Name { get; private set; }
public bool IsNew { get; private set; }
}
public class UserService : IUserService
{
public void ValidateUser(IUser user)
{
var repository = RepositoryFactory.GetUserRepository(); // use IoC if needed
if (user.IsNew && repository.UserExists(user.Name))
throw new ValidationException("Username already exists");
}
}
Create operation on domain service
I can create a domain service for creating and editing users. But someone could edit the user's name directly without calling that service...
If you properly designed your entities this should not be an issue.
How would you implement validation for Entity Framework entities when different validation logic should be applied in certain situations?
For example, validate the entity in one way if the user is an admin, otherwise validate in a different way.
I put validation attributes on context-specific, dedicated edit models.
The entity has only validations which apply to all entities.
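For example (the edit-model names and rules are made up to illustrate the split, using DataAnnotations attributes):

using System.ComponentModel.DataAnnotations;

// Rules that apply no matter who is editing
public class UserEditModel
{
    [Required, StringLength(50)]
    public string Name { get; set; }
}

// Extra fields/rules that only the admin edit screen exposes
public class AdminUserEditModel : UserEditModel
{
    [Range(1, 5)]
    public int AccessLevel { get; set; }
}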
Before I start talking about how to do this with VAB, let me say that you will have to really think your validation rules over. While differentiating validations between roles is possible, it does mean that an object saved by a user in one role can be invalid for a user in another role. That user might then need to change the object before they can save it, and the same can happen to a single user when they are promoted to another role. If you're sure about doing this, please read on.
This seems like a good job for Enterprise Library's Validation Application Block (VAB), since it allows validation of these complex scenarios. When you want to do this, forget attribute-based validation; it simply won't work. You need configuration-based validation for this to work.
What you can do with VAB is use configuration files that hold the actual validation. It depends a bit on what the actual validation rules should be, but you can create a base configuration that always holds for every object in your domain, and then create one or more configurations that contain only the extended validations. Say, for instance, that you've got a validation_base.config, a validation_manager.config and a validation_admin.config file.
You can then merge those validations together depending on the role of the user. Look, for instance, at this example that creates three configuration sources based on those configuration files:
var baseConfig = new FileConfigurationSource("validation_base.config");
var mngr = new FileConfigurationSource("validation_manager.config");
var admn = new FileConfigurationSource("validation_admin.config");
Now you have to merge these files into (at least) two configurations: one containing the base + manager rules and the other containing the base + admin rules. While merging is not supported out of the box, this article shows you how to do it. Using the code from that article, you can then do this:
var managerValidations =
    new ValidationConfigurationSourceCombiner(baseConfig, mngr);
var adminValidations =
    new ValidationConfigurationSourceCombiner(baseConfig, admn);
The last thing you need to do is wrap these validations in a class that returns the proper set based on the role of the user. You can do that like this:
public class RoleConfigurationSource : IConfigurationSource
{
    private IConfigurationSource managerValidations;
    private IConfigurationSource adminValidations;

    public RoleConfigurationSource()
    {
        var baseConfig = new FileConfigurationSource("validation_base.config");
        var mngr = new FileConfigurationSource("validation_manager.config");
        var admn = new FileConfigurationSource("validation_admin.config");

        managerValidations =
            new ValidationConfigurationSourceCombiner(baseConfig, mngr);
        adminValidations =
            new ValidationConfigurationSourceCombiner(baseConfig, admn);
    }

    public ConfigurationSection GetSection(string sectionName)
    {
        if (sectionName == ValidationSettings.SectionName)
        {
            // Pick the rule set that matches the current user's role
            if (Roles.IsUserInRole("admin"))
            {
                return this.adminValidations.GetSection(sectionName);
            }
            else
            {
                return this.managerValidations.GetSection(sectionName);
            }
        }
        return null;
    }
#region IConfigurationSource Members
// Rest of the IConfigurationSource members left out.
// Just implement them by throwing an exception from
// their bodies; they are not used.
#endregion
}
Now this RoleConfigurationSource can be created once and you can supply it when you validate your objects, as follows:
static readonly IConfigurationSource validationConfiguration =
new RoleConfigurationSource();
Validator customerValidator =
ValidationFactory.CreateValidator<Customer>(validationConfiguration);
ValidationResults results = customerValidator.Validate(customer);
if (!results.IsValid)
{
throw new InvalidOperationException(results[0].Message);
}
Please note that the Validation Application Block is not an easy framework; it takes some time to learn. When your application is big enough, however, your specific requirements will justify its use. If you choose VAB, start by reading the "Hands-On Labs" document. If you have problems, come back here on SO ;-)
Good luck.
Until I hear a brighter idea, I'm doing this:
public partial class MyObjectContext
{
public ValidationContext ValidationContext { get; set; }   // set by the caller to select the applicable rule set
partial void OnContextCreated()
{
SavingChanges += new EventHandler(EntitySavingChanges);
}
private void EntitySavingChanges(object sender, EventArgs e)
{
ObjectStateManager
.GetObjectStateEntries(EntityState.Added | EntityState.Modified | EntityState.Deleted)
.Where(entry => entry.Entity is IValidatable).ToList().ForEach(entry =>
{
var entity = entry.Entity as IValidatable;
entity.Validate(entry, ValidationContext);
});
}
}
interface IValidatable
{
void Validate(ObjectStateEntry entry, ValidationContext context);
}
public enum ValidationContext
{
Admin,
SomeOtherContext
}
public partial class MyEntity : IValidatable
{
public ValidationContext ValidationContext { get; set; }
public void Validate(ObjectStateEntry entry, ValidationContext context)
{
// this validation doesn't apply to admins
if (context != ValidationContext.Admin)
{
// validation logic here
}
}
}
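Usage would then be along these lines (assuming the ValidationContext property on the context is reachable by the caller, as in the adjusted code above; the MyEntities entity set is hypothetical):

using (var context = new MyObjectContext())
{
    // Pick the rule set that applies to the current caller
    context.ValidationContext = ValidationContext.Admin;

    context.MyEntities.AddObject(new MyEntity());
    context.SaveChanges();   // fires EntitySavingChanges, which calls Validate on each IValidatable entity
}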