
Copyright

These notes are my summary of the fifth chapter of 'Getting Started with JSON in C# Using Json.NET', a Pluralsight course by Xavier Morera (https://app.pluralsight.com/library/courses/json-csharp-jsondotnet-getting-started/table-of-contents).


Content

Serialization Fundamentals

Settings & Attributes

Custom Serialization

Performance Tips

LINQ to JSON

JSON & XML

Binary JSON (BSON)

Json.NET Schema


Outline

Manual Serialization/Deserialization

Fragments

Populate Objects

Merge Array Handling

Attributes

Memory Usage



Serializing and deserializing manually

Json.NET is extremely fast, but it relies on reflection, so if speed is key, serialize and deserialize manually using JsonTextReader/JsonTextWriter. This avoids reflection, uses less memory, and is the fastest way of reading and writing JSON.


Writing

var builder = new StringBuilder(); // never mutate plain strings repeatedly; they are immutable
StringWriter writer = new StringWriter(builder);

using (var jsonWriter = new JsonTextWriter(writer)) // JsonWriter is abstract; instantiate JsonTextWriter
{
    jsonWriter.Formatting = Formatting.Indented;
    jsonWriter.WriteStartArray();
    for (int i = 0; i < numberOfViews; i++) // numberOfViews: however many items you need to write
    {
        jsonWriter.WriteStartObject();
        // ... WritePropertyName/WriteValue pairs for each property ...
        jsonWriter.WriteEndObject();
    }
    jsonWriter.WriteEndArray();
}


Reading

var reader = new JsonTextReader(new StringReader(jsonStrings));
while (reader.Read())
{
    if (reader.Value != null)
    {
        if (reader.TokenType == JsonToken.String) { /* handle string tokens, etc. */ }
    }
}



JSON Fragments (Deserialize only what you need!)

Large JSON documents or objects may take a lot of time serializing and deserializing. However, in certain scenarios you may have a very big JSON object, but you're only interested in a specific subsection of your data. With Json.NET it is possible to extract and work with a fragment of a JSON object using Linq.


Json.NET can extract a subsection of a big JSON text, allowing you to deserialize only that small section. This is beneficial both in terms of performance, because you are not deserializing the entire object, and in terms of simplicity, because it makes the code much more readable.


List<UserInteraction> userLogs = GetTestData(); // generate huge test data

string bigLog = JsonConvert.SerializeObject(userLogs);

JArray logs = JArray.Parse(bigLog); // bigLog is a huge JSON string

List<CourseView> courses = new List<CourseView>(); // CourseView is the fragment type we are interested in within bigLog

foreach (JObject logEntry in logs)
{
    courses.Add(logEntry["courseView"].ToObject<CourseView>()); // LINQ to JSON extracts just the CourseView fragment
}


Remember that readable and simple code is much better than code that's complex and difficult to read. By using JSON fragments it makes your code much more readable and easier to maintain.



PopulateObject

With JSON fragments you're able to extract and work with sections of a large JSON object. PopulateObject is the opposite functionality where you're able to write specific values to a large JSON object.


List<UserInteraction> userLogs = GetTestData(); //Generate huge data


string jsonReviewed = "{ 'reviewed': true, 'processedBy': ['ReviewerProcess'], 'reviewedDate': '" + DateTime.Now.ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssK") + "' }"; // note MM for months; mm means minutes


foreach(UserInteraction log in userLogs)

{

JsonConvert.PopulateObject(jsonReviewed, log);

}



JSON Merge (Merge JSON Objects) (Skip)

There are cases where you have two JSON objects that you need to merge. For example, a large JSON array where each item needs to be enriched with an object read from a different data source. Json.NET provides merge functionality from one object to another, and the logic is simple: name/value pairs are copied across, skipping nulls if the existing property is not null, and you can specify how arrays are combined via merge array handling. You have four options for arrays: concat, union, replace, and merge.
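A minimal sketch of the merge API (the two objects are illustrative; JObject.Merge and JsonMergeSettings are the actual Json.NET types):

JObject o1 = JObject.Parse(@"{ 'name': 'John', 'enabled': false, 'roles': ['admin'] }");
JObject o2 = JObject.Parse(@"{ 'enabled': true, 'roles': ['user'] }");

o1.Merge(o2, new JsonMergeSettings
{
    MergeArrayHandling = MergeArrayHandling.Union // other options: Concat, Replace, Merge
});

// o1 is now { 'name': 'John', 'enabled': true, 'roles': ['admin', 'user'] }; nulls in o2 would have been skipped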



Attributes for performance (Serialize and Deserialize Only What You Need) (Skip)

Place the JsonIgnore attribute on class members you don't need, so they are skipped during serialization and deserialization; see the sketch below.
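A minimal sketch, assuming a UserInteraction class with a heavy member we don't need in the output:

public class UserInteraction
{
    public string Name { get; set; }

    [JsonIgnore] // skipped entirely during serialization and deserialization
    public byte[] RawPayload { get; set; }
}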



Optimizing Memory Usage (Avoid Large Object Heap, Use Streams)

Memory usage is critical to performance, and beyond that, it can lead to exceptions. Any object over 85 kilobytes in size goes directly to the large object heap, and this can mean OutOfMemoryException. The 85 KB figure is a threshold Microsoft arrived at after extensive testing. The problem with the large object heap, as opposed to heap generations 0, 1, and 2, is that it's not compacted, and even though there are some improvements in .NET Framework 4.5 and up, your memory usage may grow and grow until your application is in trouble. Json.NET helps you reduce memory usage by using streams instead of loading entire strings into memory: it reads a large JSON document one piece at a time.


using (StreamReader streamReader = new StreamReader(stream))
using (JsonReader jsonReader = new JsonTextReader(streamReader))
{
    var jsonSerializer = new JsonSerializer();
    List<UserInteraction> logsStream = jsonSerializer.Deserialize<List<UserInteraction>>(jsonReader);

    // The key here is that the JSON is read from the stream one piece at a time,
    // instead of loading the entire string into memory first
}


Source

These notes are my summary of the fifth chapter of 'Getting Started with JSON in C# Using Json.NET', a Pluralsight course by Xavier Morera (https://app.pluralsight.com/library/courses/json-csharp-jsondotnet-getting-started/table-of-contents). The course contains more material and demos than I've summarized here, and I'm omitting the final summary. You can also watch Pluralsight courses free for a month through the Microsoft benefit.


Copyright

These notes are my summary of the fourth chapter of 'Getting Started with JSON in C# Using Json.NET', a Pluralsight course by Xavier Morera (https://app.pluralsight.com/library/courses/json-csharp-jsondotnet-getting-started/table-of-contents).


Content

Serialization Fundamentals

Settings & Attributes

Custom Serialization

Performance Tips

LINQ to JSON

JSON & XML

Binary JSON (BSON)

Json.NET Schema


Outline

Conditional Serialization

Custom JsonConverter

Callbacks

ITraceWriter for logging and debugging



Conditional Serialization

It may be that you don't want to serialize an object as is, but only based on specific conditions; that is what conditional serialization is for.


You can specify conditions in your code using ShouldSerialize: create a method with a bool return type whose name is ShouldSerialize followed by the property name.


public class AuthorSS
{
    public string Name { get; set; }
    public bool IsActive { get; set; }
    public string[] Courses { get; set; }

    public bool ShouldSerializeCourses()
    {
        // if the author IsActive, then Courses will be serialized
        return IsActive;
    }
}


The target class (AuthorSS) pairs the member to control (Courses) with a bool ShouldSerialize<TargetMemberName>() method. Before serializing, Json.NET calls that method: if it returns true (here, if IsActive is true), the target member is serialized; otherwise it is skipped.


Or you can use IContractResolver

IContractResolver is very useful when you work with classes you did not define, when you don't want to add ShouldSerialize methods to them, when it's third-party code you cannot modify, or when you prefer to avoid placing attributes.


public class SelectiveContractResolver : DefaultContractResolver
{
    private IList<string> propertiesList = null;

    public SelectiveContractResolver(IList<string> propertiesToSerialize)
    {
        propertiesList = propertiesToSerialize; // the names of the properties we want serialized
    }

    protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)
    {
        IList<JsonProperty> properties = base.CreateProperties(type, memberSerialization);
        return properties.Where(p => propertiesList.Contains(p.PropertyName)).ToList();
    }
}


...

var contractResolver = new SelectiveContractResolver(propertiesToSerialize); // propertiesToSerialize is a list of property names

string jsonstring = JsonConvert.SerializeObject(author, new JsonSerializerSettings
{
    Formatting = Formatting.Indented,
    ContractResolver = contractResolver
});



Custom JsonConverter

Json.NET provides JsonConvert as an easy-to-use wrapper over the JsonSerializer class to allow quick and easy conversion between .NET objects and JSON text. However, you may want to extend or customize the serialization and deserialization process to fit your exact needs with a custom converter based on the JsonConverter class, overriding methods as required.


The JsonConverter class is responsible for converting from an object to JSON text and vice versa. It is extremely useful and easy to use, but what if you want finer control over the serialization and deserialization process? You can create your own custom JsonConverter class.


1. Create your own converter (a sketch follows below)

2. Derive it from JsonConverter

3. Override methods as needed

4. Set JsonSerializerSettings.Converters to a List<JsonConverter> containing your custom converter
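A minimal sketch of such a converter (the Author class and the property mapping are illustrative):

public class AuthorConverter : JsonConverter
{
    public override bool CanConvert(Type objectType)
    {
        return objectType == typeof(Author);
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        var author = (Author)value;
        writer.WriteStartObject();
        writer.WritePropertyName("fullName"); // custom property name, for illustration
        writer.WriteValue(author.Name);
        writer.WriteEndObject();
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        JObject jo = JObject.Load(reader);
        return new Author { Name = (string)jo["fullName"] };
    }
}

// usage (step 4): register the converter in the settings
var settings = new JsonSerializerSettings
{
    Converters = new List<JsonConverter> { new AuthorConverter() }
};
string json = JsonConvert.SerializeObject(author, settings);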



Callbacks

Serialization callbacks are methods that are raised before and after the serialization and deserialization process. They let you manipulate the objects or perform any operation before and after. A good example of using serialization callbacks is logging the serialization time. The methods are OnSerializing and OnDeserializing, which are called before the conversion takes place, and OnSerialized and OnDeserialized, which are called when the process completes.


public class Author
{
    private Stopwatch timer = new Stopwatch();

    public int age;
    public string name { get; set; }
    // ...

    [OnSerializing]
    internal void OnSerializingMethod(StreamingContext context)
    {
        timer.Reset(); timer.Start();
    }

    [OnSerialized]
    internal void OnSerializedMethod(StreamingContext context)
    {
        timer.Stop();
    }

    [OnDeserializing]
    internal void OnDeserializingMethod(StreamingContext context)
    {
        timer.Reset(); timer.Start();
    }

    [OnDeserialized]
    internal void OnDeserializedMethod(StreamingContext context)
    {
        timer.Stop();
    }
}



Logging and Debugging with ITraceWriter (Skip)

Debugging the serializer is not a common scenario; in most cases everything just works. But what if you want to debug, or to understand exactly what the serialization process does, or you're running into an error and can't figure out the cause? Then you need ITraceWriter, the mechanism used for debugging the serialization process. Json.NET comes with a MemoryTraceWriter, which logs all debugging information in memory and is quick and easy to use, or you can create your own custom trace writer.
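A minimal sketch using the built-in MemoryTraceWriter (Author and jsonText are illustrative):

ITraceWriter traceWriter = new MemoryTraceWriter();

var settings = new JsonSerializerSettings { TraceWriter = traceWriter };
Author author = JsonConvert.DeserializeObject<Author>(jsonText, settings);

Console.WriteLine(traceWriter); // dumps the messages collected during deserialization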



Source

These notes are my summary of the fourth chapter of 'Getting Started with JSON in C# Using Json.NET', a Pluralsight course by Xavier Morera (https://app.pluralsight.com/library/courses/json-csharp-jsondotnet-getting-started/table-of-contents). The course contains more material and demos than I've summarized here, and I'm omitting the final summary. You can also watch Pluralsight courses free for a month through the Microsoft benefit.


Copyright

These notes are my summary of the third chapter of 'Getting Started with JSON in C# Using Json.NET', a Pluralsight course by Xavier Morera (https://app.pluralsight.com/library/courses/json-csharp-jsondotnet-getting-started/table-of-contents).


Content

Serialization Fundamentals

Settings & Attributes

Custom Serialization

Performance Tips

LINQ to JSON

JSON & XML

Binary JSON (BSON)

Json.NET Schema


Outline

Settings and Attributes

Settings

Attributes


Settings and Attributes

A setting is a user preference supplied during the conversion process. It can be specified as a property on the JsonSerializer class or via JsonSerializerSettings on JsonConvert.


An attribute is a declarative tag supplied on classes, properties, and more, that is taken into account during the serialization and deserialization process.



Settings

DateFormatHandling

With DateFormatHandling you tell Json.NET how to handle dates. e.g., the ISO date format or the Microsoft date format.


MissingMemberHandling

With MissingMemberHandling you tell Json.NET what to do when the JSON contains a member that is not defined. You can ignore or you can raise an error.


ReferenceLoopHandling

With ReferenceLoopHandling you tell Json.NET what to do when there is an object that references itself. You can ignore, raise an error, or serialize.


NullValueHandling

With NullValueHandling you tell Json.NET what to do when it runs into null values, both on serialization and deserialization.


DefaultValueHandling

With DefaultValueHandling you specify how to use the default values that are set using the DefaultValue attribute. [DefaultValue(3)] int defaultAgeIsThree;

When serializing, a property set to its default value can be ignored.

When deserializing, the default value can be populated into a missing property.


ObjectCreationHandling

With ObjectCreationHandling you tell Json.NET how to handle objects that are created during the deserialization. By default, Json.NET sets values and appends values to existing collections. This might be the desired behavior in some cases, but in others it might not. You can specify if you want to reuse or replace the objects or collections that are set. This is particularly useful when you have constructors that populate values before the deserialization process.


TypeNameHandling

TypeNameHandling is very important because it tells Json.NET to preserve type information that's very useful when you're serializing and deserializing.


TypeNameAssemblyFormat

With TypeNameAssemblyFormat you tell Json.NET how you want assembly-qualified type names written during the serialization process.


Binder

With Binder you tell Json.NET how to resolve type names to .NET types.


MetadataPropertyHandling

With MetadataPropertyHandling you tell Json.NET how metadata properties such as $type and $id are read: only at the start of an object (Default), anywhere in the object (ReadAhead), or not at all (Ignore).


ConstructorHandling

ConstructorHandling is a way of telling Json.NET to specify which constructor to use, even if it's not a public constructor.


Converters

Converters is a way of telling Json.NET which converters you want to use during the deserialization and serialization process.


ContractResolver

With ContractResolver you specify to Json.NET how you want to control the serialization and deserialization without having attributes in your classes.


TraceWriter

TraceWriter is used for logging and debugging the serialization process.


Error

With Error you specify to Json.NET how is it that you want to handle errors.
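A minimal sketch showing how several of these settings are combined on JsonSerializerSettings (someObject is illustrative):

var settings = new JsonSerializerSettings
{
    DateFormatHandling = DateFormatHandling.IsoDateFormat,
    MissingMemberHandling = MissingMemberHandling.Ignore,
    ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
    NullValueHandling = NullValueHandling.Ignore,
    DefaultValueHandling = DefaultValueHandling.Ignore,
    TypeNameHandling = TypeNameHandling.Auto
};

string json = JsonConvert.SerializeObject(someObject, Formatting.Indented, settings);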



Attributes

Attributes are declarative tags that are placed on classes, properties, and more and they provide additional information to Json.NET on how to do the serialization and deserialization process.


in Json.NET

JsonObjectAttribute

The JsonObjectAttribute is placed on classes to tell Json.NET how to serialize as a JsonObject.


JsonArrayAttribute

The JsonArrayAttribute is placed on collections and tells Json.NET to serialize them as JSON arrays.


JsonDictionaryAttribute

The JsonDictionaryAttribute is placed on dictionaries and tells Json.NET to serialize them as JSON objects.


JsonPropertyAttribute

The JsonPropertyAttribute is used in fields and properties to control how they're serialized as properties in JsonObjects.


+ [JsonProperty(PropertyName = "AuthorName", Required = Required.Always, Order = 2)]

public string name { get; set; } 


If Json.NET tries to serialize the class while the property above (name) has no value, it throws an error; this is useful when a specific member must be set before serializing. The serialized property name becomes "AuthorName". The property name can also be passed positionally, as with "WhereInTheWorld" below.


+ [JsonProperty("WhereInTheWorld", DefaultValueHandling = DefaultValueHandling.Ignore)]

[DefaultValue("Costa Rica")]

public string location { get; set; }


If the location property is set to "Costa Rica", the same as the DefaultValue above, then DefaultValueHandling.Ignore makes Json.NET skip serializing the location property entirely. This is useful to save some bytes.


JsonIgnoreAttribute

JsonIgnore tells Json.NET not to include a property during serialization.



MemberSerialization OptIn, OptOut, Fields


[JsonObject(MemberSerialization = MemberSerialization.OptIn)]  // only members marked with JsonProperty are serialized
[JsonObject(MemberSerialization = MemberSerialization.OptOut)] // everything is serialized by default; you mark which members to ignore
[JsonObject(MemberSerialization = MemberSerialization.Fields)] // all fields are serialized, public and private
public class AuthorJsonObjectOptIn
{
    private string privateField;
    [JsonProperty] private string privateFieldWithAttribute; // with OptIn, this private field is also serialized
    [JsonProperty] public string name { get; set; }          // with OptIn, only these two attributed members get serialized

    public string[] courses { get; set; }
    public DateTime since;

    [NonSerialized] public bool happy;                  // with OptOut, these two members are ignored and everything else is serialized
    [JsonIgnoreAttribute] public object issues { get; set; }
}



JsonConverterAttribute

The JsonConverter can be placed on classes, fields, or properties and it tells Json.NET to use a specific JsonConverter during the serialization process.


e.g. [JsonConverter(typeof(StringEnumConverter))] public Relationship relationship { get; set; } // Relationship is an enum type

Without this converter, Json.NET serializes relationship (an enum) as a number like 1 or 2.

With the converter, it uses the enum's text name instead.


JsonExtensionDataAttribute

The JsonExtensionDataAttribute is placed on a collection field or property and it tells Json.NET to use this as a catchall bucket to include any properties that do not have any matching class members.
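A minimal sketch (the Author class is illustrative; the dictionary must be IDictionary<string, JToken> or IDictionary<string, object>):

public class Author
{
    public string Name { get; set; }

    [JsonExtensionData] // any JSON property with no matching member lands here
    public IDictionary<string, JToken> AdditionalData { get; set; }
}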


JsonConstructorAttribute

The JsonConstructor attribute is placed on a constructor and it tells Json.NET to use this constructor during deserialization.
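A minimal sketch (the Author class is illustrative; note the constructor does not need to be public):

public class Author
{
    public string Name { get; private set; }

    [JsonConstructor] // Json.NET picks this constructor during deserialization
    private Author(string name) // parameter matched to the JSON property by name
    {
        Name = name;
    }
}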



in Standard .NET Attributes

SerializableAttribute

The SerializableAttribute is used to indicate that a class can be serialized.


DataContractAttribute

The DataContractAttribute is used to specify that the type or class implements a data contract and is serializable.


DataMemberAttribute

The DataMemberAttribute, when applied to a member of a type, specifies that the member is part of a data contract.


NonSerializedAttribute

The NonSerializedAttribute tells Json.NET that a particular field should not be serialized.



Source

These notes are my summary of the third chapter of 'Getting Started with JSON in C# Using Json.NET', a Pluralsight course by Xavier Morera (https://app.pluralsight.com/library/courses/json-csharp-jsondotnet-getting-started/table-of-contents). The course contains more material and demos than I've summarized here, and I'm omitting the final summary. You can also watch Pluralsight courses free for a month through the Microsoft benefit.


Copyright

These notes are my summary of the second chapter of 'Getting Started with JSON in C# Using Json.NET', a Pluralsight course by Xavier Morera (https://app.pluralsight.com/library/courses/json-csharp-jsondotnet-getting-started/table-of-contents).


Content

Serialization Fundamentals

Settings & Attributes

Custom Serialization

Performance Tips

LINQ to JSON

JSON & XML

Binary JSON (BSON)

Json.NET Schema


Outline

Serialization And Deserialization

JsonSerializer and JsonConvert

JsonTextReader and JsonTextWriter

Dates in JSON

Error Handling


Serialization And Deserialization

Serialization and deserialization is the main functionality of Json.NET. It involves taking a data structure or object and converting it back and forth between JSON text and .NET objects. A serialized .NET object can be stored as a stream of bytes, in a file, or in memory, and later used to recreate the original object. You have to be careful, though: in some cases there are private implementation details that are not available when serializing or deserializing, so you need to review the recreated object to determine whether any information is missing. In the serialization and deserialization process, you map property names and copy their values using the main JsonSerializer class with the support of JsonReader and JsonWriter.



JsonSerializer and JsonConvert

The JsonSerializer class is a straightforward way of converting between JSON text and .NET objects. It provides a great deal of control and customization, being able to read and write directly to streams via JSON text reader and JSON text writer.

Simply use the serialize and deserialize methods.


JsonSerializer serializer = new JsonSerializer();
serializer.Serialize(writer, someObject); // writes to a JsonWriter or TextWriter


And it gets even better: Json.NET comes with a very easy-to-use wrapper over JsonSerializer called JsonConvert that makes the serialization process trivial for most scenarios.

Simply use the SerializeObject and DeserializeObject methods.


string author = JsonConvert.SerializeObject(authorObject);
Author back = JsonConvert.DeserializeObject<Author>(author);


Caution : There are many classes in Json.NET that provide async methods. Stay away from them as they are now marked as obsolete.



Demo note

1. Formatting.Indented setting on JsonConvert.SerializeObject method

2. The PreserveReferencesHandling.Objects setting on the JsonConvert.SerializeObject method


Whenever you use Json.NET to serialize a class, by default it serializes the object by value. This means that if you have a class holding two references to another class, it creates two copies of the values. That might not be a problem if you're just serializing the class, but when you deserialize, converting back from JSON text to a .NET object, you may want two references to the same object instead of two different objects; this is where preserving object references comes into play.


When serializing a class that holds a list containing references to itself (a self-referencing object graph), set PreserveReferencesHandling.Objects on the JsonConvert.SerializeObject method: references to objects are preserved, so the deserializer will not create duplicate instances. A sketch follows below.
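A minimal sketch of that setting (author is an illustrative self-referencing instance):

var settings = new JsonSerializerSettings
{
    PreserveReferencesHandling = PreserveReferencesHandling.Objects
};

// emits "$id"/"$ref" metadata so repeated references point to one shared object
string json = JsonConvert.SerializeObject(author, Formatting.Indented, settings);

var back = JsonConvert.DeserializeObject<Author>(json, settings); // no duplicate instances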


3. dynamic and ExpandoObject

dynamic varName = new ExpandoObject(); // ExpandoObject: members can be dynamically added and removed at runtime
// ... add members to varName ...
// then just pass varName to JsonConvert.SerializeObject, and deserialize it back into a dynamic


4. JsonConvert.DeserializeObject<Dictionary<string, string>>(...)


5. AnonymousType

JsonConvert.DeserializeAnonymousType method
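A minimal sketch: the anonymous instance acts as a template describing the expected shape (values are illustrative):

var template = new { name = "", happy = false };
var result = JsonConvert.DeserializeAnonymousType("{ 'name': 'EMPister', 'happy': true }", template);
Console.WriteLine(result.name); // EMPister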



JsonTextReader and JsonTextWriter

JsonConvert is useful but let's learn a little bit more about the control and performance that Json.NET provides to you via the JsonSerializer class. Let's start first with JsonReader and JsonWriter.


The JsonTextReader is for reading JSON. It's non-cached and forward-only, and it's used for large objects and files.

The JsonTextWriter is used for creating JSON. It's also non-cached and forward-only, and gives you much more control along with better performance.



Demo note

1. Using the JsonSerializer with the aid of a StreamWriter


serializer = new JsonSerializer();
serializer.NullValueHandling = NullValueHandling.Ignore; // ignore null values when serializing
serializer.Formatting = Formatting.Indented;             // indenting makes the file bigger

using (StreamWriter sw = new StreamWriter(@"..\..\jsonfilename.json"))
{
    using (JsonWriter writer = new JsonTextWriter(sw))
    {
        serializer.Serialize(writer, author); // author is the instance to serialize
    }
}


2. JsonConvert, and ultimately the JsonSerializer class, uses reflection to convert from JSON text to .NET classes. Even though Json.NET is very fast, using reflection makes it a little slower than it could be; thus we have the JsonTextReader, which does not use reflection and provides the fastest way of reading JSON text.


JsonTextReader jsonReader = new JsonTextReader(new StringReader(jsonText)); // jsonText is a string
while (jsonReader.Read()) // iterate the tokens in jsonText
{
    if (jsonReader.Value != null) Console.WriteLine("Token: {0}, Value: {1}", jsonReader.TokenType, jsonReader.Value);
    // else: the token has no value (e.g. StartObject); skip it
}


TokenType : Start/EndObject, PropertyName, Date, Boolean, String, Integer, Start/EndArray... etc

The JsonTextReader basically iterates over the entire jsonText, retrieving one token at a time.


3. Write jsonText in a manual way for performance and control


StringWriter sw = new StringWriter();
JsonTextWriter writer = new JsonTextWriter(sw);

writer.Formatting = Formatting.Indented; // at the beginning!

writer.WriteStartObject();
writer.WritePropertyName("name");
writer.WriteValue("EMPister");
writer.WritePropertyName("courses");
writer.WriteStartArray();
writer.WriteValue("Json course");
writer.WriteEndArray();
writer.WritePropertyName("since");
writer.WriteValue(new DateTime(2018, 03, 29));
writer.WritePropertyName("happy");
writer.WriteValue(true);
writer.WritePropertyName("issues");
writer.WriteNull();
// a nested object must follow a property name:
// writer.WritePropertyName("details"); writer.WriteStartObject(); ... writer.WriteEndObject();
writer.WriteEndObject();
writer.Flush();

string jsonText = sw.GetStringBuilder().ToString();
// In .NET, strings are immutable, so if you keep modifying a string without a StringBuilder
// you will suffer a deep performance hit



Dates in JSON

1. Without any setting, Json.NET's default is ISO 8601 (2009-07-11T23:00:00)


2. Use Microsoft date format with setting


JsonSerializerSettings settingsMicrosoftDate = new JsonSerializerSettings
{
    DateFormatHandling = DateFormatHandling.MicrosoftDateFormat // "\/Date(1247374800000-0600)\/"
};


3. Use custom date converter


... = JsonConvert.SerializeObject(author, Formatting.Indented, new IsoDateTimeConverter());


4. Use Custom format date


JsonSerializerSettings settingsCustomDate = new JsonSerializerSettings
{
    DateFormatString = "dd/MM/yyyy" // MM for months; mm means minutes
};


5. Use JavaScript date


... = JsonConvert.SerializeObject(author, Formatting.Indented, new JavaScriptDateTimeConverter());  // "new Date(1247374800000)"



Error Handling


List<string> errors = new List<string>();

JsonSerializerSettings jSS = new JsonSerializerSettings
{
    // way 1: an inline delegate
    Error = delegate(object sender, ErrorEventArgs errorArgs)
    {
        errors.Add(errorArgs.ErrorContext.Error.Message);
        errorArgs.ErrorContext.Handled = true;
    },
    // way 2 (alternative): a named handler method instead of the delegate above
    // Error = HandleDeserializationError,
    Converters = { new IsoDateTimeConverter() }
};


private static void HandleDeserializationError(object sender, ErrorEventArgs errorArgs)
{
    var currentError = errorArgs.ErrorContext.Error.Message;
    // test whether the data is in another format
    errorArgs.ErrorContext.Handled = true;
}



Or simply let the error throw and handle it with a try-catch block.



Source

These notes are my summary of the second chapter of 'Getting Started with JSON in C# Using Json.NET', a Pluralsight course by Xavier Morera (https://app.pluralsight.com/library/courses/json-csharp-jsondotnet-getting-started/table-of-contents). The course contains more material and demos than I've summarized here, and I'm omitting the final summary. You can also watch Pluralsight courses free for a month through the Microsoft benefit.


Copyright

These notes are my summary of the last chapter of 'Modern Software Architecture: Domain Models, CQRS, and Event Sourcing', a Pluralsight course by Dino Esposito (https://app.pluralsight.com/library/courses/modern-software-architecture-domain-models-cqrs-event-sourcing/table-of-contents). I have also received the author's permission to post these notes.


Content

Discovering the Domain Architecture through DDD

The DDD Layered Architecture

The "Domain Model" Supporting Architecture

The CQRS Supporting Architecture

Event Sourcing

Designing Software Driven by the Domain


Outline

Dealing with Legacy Code

Revisiting CRUD Systems

UX-driven Design

Pillars of Modern Software



Dealing with Legacy Code

There are many known ways to define legacy code. Often by legacy code we mean code written by other people that is complex and undocumented enough that everybody is scared to touch it. Legacy code is primarily code that just works. It might not be doing things in the most appropriate way for the current needs of the business, so you feel it should be properly refactored and possibly rewritten; in other words, you don't like having it around. But because it works, and because everybody is constantly late on something, there is never time to do it, and in the end you can hardly get rid of it.


However, sometimes what we call legacy code is not necessarily badly or poorly written code. Sometimes it is just code that, given the current state of the business, should be doing additional things or doing them differently. But for a number of reasons it's risky to touch, or it just takes time because of the complexity it carries.


Common Aspects of Legacy Code

Has an established and implicit model

Doesn't typically have a public programmable API

Written to have some work done, not to be reused

Written years ago according to practices now obsolete


How to incorporate legacy code in new applications? 

My favorite approach is to try to expose the legacy code as a service, if at all possible. If not, I would seriously consider rewriting. But this is just a general guideline I use; in the end, it always depends case by case.


Legacy Code as a Service

So I'd say you start with the idea of rewriting the system and get all the abstractions you need. If some features you need to rewrite then exist in the legacy system, I'd consider incorporating existing assets as services rather than rewriting everything from scratch. Needless to say, this only works if the new system you need has exactly the same features and the same behavior. If you can disconnect the legacy code and recompile it as an independent piece of code, then put it in a bounded context, expose it as a service, and create a façade around it to make it work with the rest of the new system. It may not work, but if it works, it's just great.


Just because you can do something doesn't mean you should do that particular thing.

Not all legacy assets are equal. Some can be reused as services, some not. Some are just old and obsolete.

Before integrating legacy code, make sure you carefully evaluate the costs of not rewriting from scratch and also the costs of integrating legacy code as a service. Next, if legacy code can be turned into a service, then just let it be and focus on other things to do.



Revisiting CRUD Systems

CRUD is a long-time popular acronym to indicate a system that focuses on four fundamental operations in relationship to both the user interface and storage, create, read or retrieve, update, and delete. In this regard, it's essentially correct to state that all systems are CRUD systems at least to some extent.


The CRUD cycle describes the core functions of a persistent database, typically a relational database. The code implements the CREATE, READ, UPDATE, and DELETE operations and saves the resulting state to the database. While it's probably correct to state that nearly all systems are CRUD to some extent, the devil is in the details of what actually turns out to be the object of those operations. Is it a single individual entity, or a more complex and articulated data representation? Let's list a few common attributes of a CRUD system.


First and foremost, a CRUD system is database-centric. The model of data available to the code is exactly the same as the one being persisted. Deletions and updates directly affect the stored data. This means that at any time you know the current state, but you are not tracking what has really happened. A CRUD system for the most part has relatively basic business logic, is used by one or very few users, and deals with very elementary collections of data, mostly one-to-one with database tables. In summary, a CRUD is typically quick and easy to write, and consequently it often looks unrealistic these days.


When we hear or pronounce the words "it's basically a CRUD", we are right in that we want the four basic operations implemented. But whether or not actions are tracked, the amount of business logic and rules, concurrency, and dependencies between data entities change the scope of the system significantly and raise the level of complexity and effort. So it's basically a CRUD system, but it must be devised in quite a different way to be realistic and effective today.


Sure, a CRUD system is database-centric. In modern software, however, the persistence model is just one model to think about. Sometimes, to better process relationships between collections of data, you need to build a different model that expresses behavior while saving state, or saves just the actions, through a different model. More often than not, the context of a change must be tracked: an update cannot simply overwrite the current state of a given record; it may be necessary to track each update as a delta, and all the deltas over the life of a data entity. Plenty of business logic, concurrency issues, and interconnected entities, graphs of objects, populate modern software, and systems must take that into account.


So we still have CRUD operations against a database, but CREATE deals with graphs of entities, READ fills out views, and UPDATE and DELETE log the change to the current state of the system. Today C, U, and D (CREATE, UPDATE, and DELETE) are commands, and reads (the R) are queries, and this is why the CQRS approach is vital and fundamental for today's applications.



UX-driven Design

"What you see is what you get" is an old slogan of graphical development tools like Visual Basic, but today it is also a philosophy we can apply to software architecture. What users perceive of each application is what they see (the user interface) and what they get (the user experience) as they interact with it. The user experience is the experience users go through when they interact with a given application.


In light of the user experience, the top-down approach is a more effective way to architect a system than the bottom-up approach. The bottom-up approach to software architecture is based on the idea that you understand requirements and start building the foundation of the system from the persistence model. Your foundation is a layer of data with some endpoints looking for an outlet. That works as long as users passively accept whatever UI they are given, but more and more, users now actively dictate user interface and user experience preferences. Therefore, you'd better start designing from a front-end they like. Endpoints to connect to the rest of the system are exposed from the presentation layer, the rest of the system is designed just to match those endpoints, and persistence is designed to save the data you need, in the shape that data takes on its way down the stack. In the end, it is presentation, and everything else is sort of a big black box.


The whole point of top-down design is making sure that you are going to produce a set of screens and an overall user experience that fully matches user expectations. Performance and even business logic can be fixed and fine-tuned, but if you miss the user experience, customers may have to alter the way they do business and go through their tasks. It may not be ideal.


UX-driven Design in 3 Steps

1. Build up UI forms and screens as users love them

You can use wireframes and wireframing tools for this purpose. Once the screens have been approved, you have a complete list of triggers that can start any business process.

2. Implement workflows and bind them to presentation endpoints

3. A workflow represents a business process; create all the layers that need to be there

Repositories, domain services, whatever else serves the purpose of successfully implementing the business process.


For the whole method to work, however, it's key that you hold on and iterate on the UI forms approval process (step 1) until users explicitly sign off. Only when they say yes do you proceed. Any time you appear to waste at this stage is saved later by not having to restructure the code because of a poor or wrong user experience.


In summary, sign off on what users want and use sketches and wireframes to get their approval. A sketch is a freehand drawing, mostly made to capture an idea. A wireframe is a more sophisticated sketch with layout, navigation, and content information. The point is to avoid doing any seriously billable work until you are certain about the front-end of what you're going to create.


More in detail, UX-driven design suggests you have a basic flowchart for each screen

Determine what comes in and out of each screen and create view-model classes

Make application layer endpoints receive and return such DTO (data transfer object) classes

Make application layer orchestrate tasks on layers down the stack

So repositories, domain services, external services, and whatever else you may need to set up and work with


Responsibilities of the UX Architect

Defining the information architecture and content layout

Defining the ideal interaction, which means essentially storyboards for each screen and each UI trigger

Being around the definition of the visual design

Running usability reviews


Tools for the UX Architect

Balsamiq, UXPin, Infragistics Indigo, JustInMind


UX Architect in the end is only a role and the same individual can be at the same time acting as the UX and/or the solution software architect.



Pillars of Modern Software

A few final words about the state of the art of today's software architecture then are in order.



First, the definition of domain-driven design should be revisited to add emphasis on the tools for domain analysis, such as ubiquitous language, bounded contexts, and context maps.


Second, in building each bounded context, the layered architecture is key to having a best-of-breed design. Today, layers are preferable over tiers, as scalability can easily be achieved with stateless, single-tier, small, compact, simple servers, implemented perhaps as web roles on some cloud platform and then scaled according to the dashboard, the rules, and the possibilities of that cloud platform.


Third, the design of systems should be top-down, as it starts from what users really want and expect. This means great user experience by design and built from the ground up.


Finally, to build the actual back-end of the system, CQRS (Command and Query Responsibility Segregation) and event sourcing are the new and important hot things. Separating the command and query stacks makes everything in the building of the application more natural, convenient, and simple to code, and even simple to optimize, because the stacks are separated and the command part and the query part can be deployed and optimized without affecting each other. Event-based persistence, yet another cool aspect of modern software architecture, lets you not miss a thing and makes the entire solution easy to extend in the future by adding more features and supporting more notifications, more commands, and consequently more events.



Source

These notes are my summary of the last chapter of 'Modern Software Architecture: Domain Models, CQRS, and Event Sourcing', a Pluralsight course by Dino Esposito (https://app.pluralsight.com/library/courses/modern-software-architecture-domain-models-cqrs-event-sourcing/table-of-contents). The course contains more material and demos than I've summarized here, and I'm omitting the final summary. You can also watch Pluralsight courses free for a month through the Microsoft benefit.


Copyright

These notes are my summary of the sixth chapter of 'Modern Software Architecture: Domain Models, CQRS, and Event Sourcing', a Pluralsight course by Dino Esposito (https://app.pluralsight.com/library/courses/modern-software-architecture-domain-models-cqrs-event-sourcing/table-of-contents). I have also received the author's permission to post these notes.


Content

Discovering the Domain Architecture through DDD

The DDD Layered Architecture

The "Domain Model" Supporting Architecture

The CQRS Supporting Architecture

Event Sourcing

Designing Software Driven by the Domain


Outline

Event Sourcing Introduction

Event Sourcing at a Glance

Events as the Data Source

Event-based Persistence

Data Projections from Stored Events

Event Sourcing in Action

Event-based Data Stores



Event Sourcing Introduction

The real world is mostly made of events. We observe events after the fact: events are notifications of things that just happened. In software, instead, we rarely use events in the business logic. We tend to build abstract models that hopefully provide a logical path for whatever we see in the domain, and sometimes, working this way, we end up with a root object we might as well call God or Universe. Events can bring some key benefits to software architecture. Primarily, events are immutable pieces of information, and by tracking events you never miss a thing that happens in and around the business domain. Finally, events can be replayed, and by replaying events you can process their content and build multiple projections of the same core data on top of them.



Event Sourcing at a Glance

Event sourcing is a design approach based on the assumption that all changes made to the application state during the entire lifetime of the application are stored as a sequence of events. Event sourcing can be summarized by saying that we end up having serialized events as the data building blocks of the application. Serialized events are actually the data source of the application.


This is not how the vast majority of today's applications work. Most applications today work by storing the current state of domain entities and use that stored state as the starting point to process business transactions. 


e.g. Let's say you have a shopping cart. How would you model a shopping cart?

Reasonably, if you're going to model it, you would probably add a list of ordered products, payment information, for example, credit card details, maybe shipping address. The shopping cart has now been modeled and the model is a faithful representation of the internal logic we commonly associate with a shopping cart.


But here is another way to represent the same information you would expect to find around a shopping cart. Instead of storing all the pieces of information in the columns of a single record or in the properties of a single object, you can describe the state of the shopping cart through the sequence of events that brought it to contain a given list of items: add first item, add second item, add payment information, update the second item (for example, to change the quantity or the type of a product), maybe then remove the first item, add shipping address. All these events relate to the same shopping cart entity, but we don't save the current state of the shopping cart anywhere, just the steps, and the related data, that brought it to be what it is actually under our eyes. By going through the list of events, you can then rebuild the state of the shopping cart at any time. This is an event-based representation of an entity.


For most applications, think for example of CRUD applications, structural and event-based representations of entities are functionally equivalent today, and most applications store their state via snapshots: they store the current state and ignore events. More generally, though, this is only the tip of the iceberg. There is a more general, all-encompassing way to look at storage that incorporates events just under the surface. On top of an event-based representation of data, you can still create, as a snapshot database, the current-state representation that returns the last known good state of a domain entity. This, at the end of the day, is event sourcing.


Key Facts of Event Sourcing

An event is something that has happened in the past

Events are expression of the ubiquitous language

Events are not imperative and are named using past tense verbs

Have a persistent store for events

Append-only, no delete

Replay the (related) events to get to the last known state of an entity

Replay from the beginning or a known point (snapshot)


Events altogether express the state of a domain entity. To get that state, you need to replay events. To get the full state, you should replay from the beginning of the application's timeline. Sometimes this may require you to process too much data; in that case, you can define snapshots along the way and replay events as the delta of the state from the last known snapshot. In the end, a snapshot is the state of the entity at a given time.


An event is something that happened in the past. This is a very important point about events, and should be kept clearly in mind. 

Once stored, events are immutable. Events can be duplicated and replicated, especially for scalability reasons.

Any behavior associated with a given event was performed at the moment the event was notified; replaying the event, in other words, doesn't require repeating the behavior.

When using events, you don't miss a thing. You track everything that happened at the time it happened, and regardless of the effects it produced.


With events, any system data is saved at a lower abstraction level than today.



Events as the Data Source

CQRS is the dawn of a new software design world, and events are what we find when the dawn has actually turned into a full day.


At some point, relational databases became the center of the universe as far as software design is concerned. They brought the idea of the data model and persistence of the data model. It worked, and it still works. But it's about time to reconsider this approach, because chances are it will prove incapable of delivering results that are becoming more and more important. Events are a revolutionary new approach to software design and architecture, except that events are not really new and not really revolutionary. Relational databases themselves don't manage current state internally, even though they expose current state outside; internally, relational databases work with the actual actions that modify the initial state. Events, in the end, are not a brilliant new idea of these days: thirty years ago, old applications already used something based on the concept of events, simply without a fancy name for the approach. Events may be the tokens stored in the application data source in nearly every business scenario. Events go at the root of the problem and offer a more general storage solution for just about every system.


So, what should you do today? I see two options.

First option, you can insist on a system that primarily saves the current state, but start tracking relevant application facts through a separate log.

Second option, you store events and then replaying through the event stream you build any relevant facts you want to present to users as data.


So in other words, from events you create a projection of data in much the same logical way you build a projection of computed database columns when you run certain SQL queries.


To decide whether or not you need events in your specific scenario, consider the following two points.

Is it important to track what was added and then removed?

Is it important business-wise to track when an item was removed from the cart?


If you say yes, then you probably need events.


Storing events in a stream works really well, I would even say naturally, with the implementation of commands, and commands submitted by the user form a natural stream of events. At the same time, queries work much better if you have structured data, so in the end you need both: an event stream for commands, and classic current-state storage for queries. This is the whole point of CQRS, Command and Query Responsibility Segregation. As an architect, you only have to decide whether you want a command stack based on an event stream that works in sync with, say, an orders table, or, more conservatively, to start with orders tables and record an event stream separately for any relevant fact: a sort of middle ground that lets you proceed step by step.



Event-based Persistence

In software, persistence is made of four key operations through which developers manipulate the state of the system: CREATE, UPDATE, and DELETE to alter the state, and QUERY to read the state without altering it. A key principle of software, and especially CQRS-inspired software, is that asking a question should not change the answer.


CREATE

Let's see what happens when a new relevant entity is created and must be persisted in an event-based system. A request comes from the presentation layer, or arrives in some other asynchronous way; the command stack processes it and appends a new record, say an order, with all its details. The existing data store is extended with a new item that contains all the information it needs to be immutable. If, say, a current rate is required, that rate is better stored inside the logged information rather than read dynamically from elsewhere; that way, the stored event is really immutable and self-contained. There is also some information of another type you need to have. Each event should be identified uniquely, and you need to give each event an app-specific code to indicate whether you're adding an order or an invoice. This can be achieved in a variety of ways: by type if you use a document store, or through a column if you use a relational store. In addition, you want a timestamp so that it's clear when the operation took place, and any relevant ID needed to identify the entity, or better, the aggregate being stored. Finally, the full details of the record must be saved as well: if you are recording the creation of an order, you want all the order details, including shipping, payment, transaction ID, confirmation ID, order ID, and so forth. If necessary, payment and shipping operations may in turn generate other events to be tracked as well. When it comes to event storage, the technology is transparent: it can be relational, document-based, graph-based, or whatever works for you. The CREATE operation in an event-based persistence scenario is not much different from classic persistence: you just add a record.


UPDATE

UPDATE is a bit different, as you don't override an existing record but add another record that contains the delta. You need to store the same information as with CREATE operations, including a unique event ID, a timestamp, a code to recognize the operation, plus the aggregate ID and the delta, the changes applied. If you only updated the quantity of a given product in a given order, you only store the new quantity and the product ID. Storage is also transparent in this case and can be relational, document-based, graph-based, or whatever works for you. Note that in some cases you might also want to store the full state of the entity along with the specific event information. This can be seen as a form of optimization: you keep track of individual events, but you also serialize the aggregate with the most up-to-date state in the same place to speed up first-level queries. I will have more to say about queries in a while.


DELETE

The DELETE operations are analogous to UPDATE operations. The deletion is logical and consists in writing that the entity with a given ID is no longer valid and should not be considered for the purposes of the business. The information in this case is just the event ID, timestamp, operation code, and aggregate ID. There is no strict need for delete-specific information unless, for example, the system requires a reason for the deletion. Storage is transparent and can be, in this case too, whatever works for you.


UNDO

Events lend themselves very well to implementing UNDO functionality. The simplest way to do it is to physically delete the last record as if it never happened. This, however, contrasts with the philosophy of event sourcing, where you never miss a thing and track everything that happens. UNDO can also be a logical operation, which, for example, allows you to track how many times a user attempted to undo things, but personally I wouldn't mind a physical deletion in this specific case. What is far more important is not deleting events in the middle of the stream: that would really be dangerous and lead to inconsistent data.


Query... see Replay



Data Projections from Stored Events

The main aspect of event sourcing is the persistence of messages, which enables you to keep track of all changes in the state of the application. By reading back the log of messages, you rebuild the state of the system. This aspect is what is commonly called the replay of events.


Replay

Replay is a two-step operation.

First, you grab all events stored for a given aggregate

Second, you loop through all the events, extract information from each, and copy that information to a fresh instance of the aggregate of choice.


A key function one expects out of an event-based data store is the ability to return the full or partial stream of events. This function is necessary to rebuild the state of an aggregate out of recorded events.


public IEnumerable<GenericEventWrapper> GetEventStream(string id)
{
    return DocumentSession
        .Query<GenericEventWrapper>()
        .Where(t => t.AggregateId == id)
        .OrderBy(t => t.TimeStamp) // matches the TimeStamp property defined below
        .ToList();
}


As you can see from the code, it's all about querying records in some way using the AggregateId and Timestamp to order or restrict. Notice that you can code the same also using a relational database.


public class GenericEventWrapper
{
    public string EventId { get; set; }
    public string EventOperationCode { get; set; }
    public DateTime TimeStamp { get; set; }
    public string AggregateId { get; set; }
    public DomainEvent Data { get; set; }
}


The structure of a generic event class depends on the application. It may be different or somewhat constrained if you are using ad-hoc tools for event sourcing. In general terms, an event class can be anything like the code above. Again, the key pieces of information are the EventId, something that allows you to distinguish and easily query the type of the event, the TimeStamp, the AggregateId, and of course the event-specific data.


The actual rebuilding of the state consists in going through all the events, grabbing information, and altering the state of a fresh new instance of the aggregate of choice.


public static Aggregate PlayEvents(string id, IEnumerable<DomainEvent> events)
{
    var aggregate = new Aggregate(id);
    foreach (var e in events)
    {
        if (e is AggregateCreatedEvent) aggregate.Create(e.Data);
        if (e is AggregateUpdateEvent) aggregate.Update(e.Data);
        if (e is AggregateDeletedEvent) aggregate.Delete(e.Data);
    }
    return aggregate;
}


What you want to do is store, in the fresh instance of the aggregate, the current state it has in the system, or the state that results from the selected stream of events. The way you update the state of the aggregate depends on the actual interface exposed by the aggregate itself, whether it's a domain class or relies on domain services for manipulation.


There are a few things to mention about event replay.

1. Replay is not about repeating the commands that generated the events. Commands are potentially long-running operations with concrete effects that generate event data. Replay is just about looking into this data and performing logic to extract information from it.

2. Event replay copies the effects of occurred events and applies that to fresh instances of the aggregate.

3. Stored events may be processed in different ways in different applications. There is great potential here. Events are data rendered at a lower abstraction level than plain state. From events you can rebuild any projection of data you like, including the current state of aggregates, which is just one possible way of projecting data (custom projection of event data). Ad-hoc projections can address other, more interesting scenarios, like business intelligence, statistical analysis, what-if analysis, and, why not, simulation.


More specifically, let's say that you have a stream of events. You can extract a specific subset, whether by date or type or anything else. Once you've got the selected events, you can replay them, apply ad-hoc calculations (perform different calculations) and business processes (apply different forms of business logic), and extract just the custom new information you were looking for.
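
As an illustration, here is a minimal sketch of such a custom projection, reusing the DomainEvent type from the code above; GoalScoredEvent, TeamId, and ScoreProjection are hypothetical names, not from the course (assumes System.Linq is in scope for OfType):

public class ScoreProjection
{
    public int Team1Goals { get; set; }
    public int Team2Goals { get; set; }
}

public static ScoreProjection ProjectScore(IEnumerable<DomainEvent> events)
{
    // Replay only the events relevant to this projection and ignore the rest
    var projection = new ScoreProjection();
    foreach (var e in events.OfType<GoalScoredEvent>())
    {
        if (e.TeamId == 1) projection.Team1Goals++;
        else projection.Team2Goals++;
    }
    return projection;
}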


Another great point about events and replay of events is that streams are constant data, and because of that, they can be replicated easily and effectively, for example, to enhance the scalability potential of the application. This is actually a very, very pleasant side effect of event immutability.


What if you get to process too many events for rebuilding the desired projection of data? (Performance concern)

Projecting state from logged events might be heavy-handed and impractical with a very large number of events, and be aware that in many applications the number of logged events can only grow over time, because the event store is append-only.


An effective workaround consists of taking a snapshot of the aggregate state, or of whatever business entities you use, at some recent point in time. Instead of processing the entire stream of events, you serialize the state of aggregates at a given point in time and save that as a value. Next, you keep track of the snapshot point and replay events for an aggregate from the latest snapshot to the point of interest.
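
A minimal sketch of snapshot-based replay, consistent with the PlayEvents method above; AggregateSnapshot and its Restore method are hypothetical names:

public static Aggregate PlayEventsFromSnapshot(AggregateSnapshot snapshot, IEnumerable<DomainEvent> newerEvents)
{
    // Start from the state captured at the snapshot point...
    var aggregate = snapshot.Restore();

    // ...and replay only the events recorded after that point
    foreach (var e in newerEvents)
    {
        if (e is AggregateUpdateEvent) aggregate.Update(e.Data);
        if (e is AggregateDeletedEvent) aggregate.Delete(e.Data);
    }
    return aggregate;
}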



Event Sourcing in Action (SKIP)



Event-based Data Stores

You can definitely arrange an event sourcing solution all by yourself, but ad-hoc tools are appearing to let you deal with the storage of events in a more structured way. The main benefit of using an event-aware data store is that the tool, like a database, essentially guarantees business consistency and full respect of the event sourcing approach.


Let's briefly look at one of these event-based data stores

Event Store : geteventstore.com

Event Store works by offering an API, for plain HTTP and for .NET, and the API is for event streams. In the jargon of Event Store, an aggregate equates to a stream in the store. Have no concerns about the ability of the Event Store database to manage and store potentially millions of events grouped by aggregate; this is not a concern of yours, as the framework underneath the tool is able to work with these numbers.

You can do three basic operations on an event stream in Event Store: you can write events; you can read events, in particular the last event, a specific event by ID, and also a slice of events; and you can subscribe to get updates. In the representation of an event as it is written to the store, the timestamp is managed internally, and you only provide an eventId, an eventType, and data. Importantly, there is no way to delete events arbitrarily.

Another interesting feature of Event Store is subscriptions. There are three types of subscriptions.

A volatile subscription means that a callback function is invoked every time an event is written to a given stream. You get notifications from this subscription until it is stopped.

A catch-up subscription means that you get notifications from a given event, specified by position, up to the current end of the stream: give me all events from this moment onward. Once the end of the stream has been reached, the catch-up subscription turns into a volatile one, and you still keep getting any new event added to the stream.

Finally, the persistent subscription addresses the scenario where multiple consumers are waiting for events to process. The subscription guarantees that events are delivered to consumers at least once, but possibly multiple times, and if this happens, the order is unpredictable. This solution is specially designed for highly scalable, collaborative systems, and to be effective it requires a software design that supports the notion of idempotency, so that if an operation is performed multiple times, the effect is always the same.

In particular, catch-up subscriptions are good for components called denormalizers, which play a key role in CQRS; in CQRS jargon, denormalizers are just those components that build projections of data for the query stack.
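
For a feel of the API, here is a minimal sketch of writing and reading a stream, assuming the legacy EventStore.ClientAPI .NET package and a locally running server; the stream name and JSON payload are made up, and the newer gRPC client exposes a different API:

using System;
using System.Text;
using System.Threading.Tasks;
using EventStore.ClientAPI;

class EventStoreDemo
{
    static async Task Main()
    {
        var conn = EventStoreConnection.Create(new Uri("tcp://admin:changeit@localhost:1113"));
        await conn.ConnectAsync();

        // Append an event to the stream of one aggregate
        var payload = Encoding.UTF8.GetBytes("{\"team\":1,\"minute\":42}");
        var e = new EventData(Guid.NewGuid(), "GoalScored", true, payload, null);
        await conn.AppendToStreamAsync("match-123", ExpectedVersion.Any, e);

        // Read back a slice of the stream, oldest event first
        var slice = await conn.ReadStreamEventsForwardAsync("match-123", StreamPosition.Start, 100, false);
        foreach (var recorded in slice.Events)
            Console.WriteLine(recorded.Event.EventType);
    }
}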



Source

All of this content is a summary of the sixth chapter of the course 'Modern Software Architecture: Domain Models, CQRS, and Event Sourcing' by Dino Esposito on Pluralsight (https://app.pluralsight.com/library/courses/modern-software-architecture-domain-models-cqrs-event-sourcing/table-of-contents). The course contains more material and demos than this summary, and the final Summary module is omitted. You can also watch Pluralsight courses free for a month through Microsoft support.

AND

Copyright

All of this content is a summary of the fifth chapter of the course 'Modern Software Architecture: Domain Models, CQRS, and Event Sourcing' by Dino Esposito on Pluralsight (https://app.pluralsight.com/library/courses/modern-software-architecture-domain-models-cqrs-event-sourcing/table-of-contents). I have also received the original author's permission to post this.


Content

Discovering the Domain Architecture through DDD

The DDD Layered Architecture

The "Domain Model" Supporting Architecture

The CQRS Supporting Architecture

Event Sourcing

Designing Software Driven by the Domain


Outline

CQRS at a Glance

CQRS Regular

CQRS Premium

CQRS Deluxe



CQRS at a Glance

A single, all-encompassing object model to cover just any functional and non-functional aspect of a software system is a wonderful utopia, and sometimes it's even close to being an optical illusion.



In module four, I briefly presented two ways to design a class for a sport match: one that incorporates business rules for the internal state and exposes behavior, and one that was a mere data transfer object, trivial to persist, but devoid of business logic and with a public, unfiltered read/write interface. In a single, all-encompassing object model perspective, the former class, the one with the rich behavior, with methods like Start, Finish, and Goal, is just perfect.

But are you sure that the same class would work well in a plain query scenario? When your user interface just needs to show the current score of a given match, and that's it, is such a behavior-rich class appropriate? Obviously, another, simpler match class would be desirable to have, especially if it acts as a plain data transfer object.

So the question is, should you have both, or should you manage to disable access to methods in query scenarios? In a single domain model scenario, neither of the classes shown is perfect, even though both can be adapted and used in a system that just does what it's supposed to do.

The behavior-rich class is great in use cases in which the state of the system is being altered. The other is an excellent choice instead when the state of the system is being reported. The behavior-rich class requires fixes to fully support persistence via an O/RM, and if used in a reading scenario, it would expose behavior to the presentation layer, which could be risky. The other class has no business rules implemented inside and, because of its public getters and setters, is at risk of generating inconsistent instances.


CQRS (Command and Query Responsibility Segregation)

Defining a single class for both commands and queries may be challenging, and realistically you can get there only at the cost of compromises, though compromises may be acceptable. Treating commands and queries differently, through different stacks, however, is probably a better idea.


A command is then defined as an action that alters the state of the system and doesn't return data.

A query instead is an action that reads and returns data and doesn't alter the state of the system.


CQRS is then all about implementing the final system so that the two responsibilities are distinct and each has its own implementation.



In terms of layers and system architecture, with CQRS we move from a classic multilayer architecture with presentation, application, domain, and infrastructure to a slightly different layout. In the new layout, presentation and infrastructure are unchanged. The application layer is crucial to have in the command stack, but can be considered optional in the query stack.

The huge difference is the implementation of the domain layer. In CQRS, you may need to design a domain model only for the command stack, and can rely on plain DTOs (Data Transfer Object: a data container for moving data between layers; a DTO only passes data, contains no business logic, and has just simple setters and getters) and direct data access in the query stack.


Aspects of CQRS

Why is CQRS then good for architects to consider?


Benefits

Separation of the stacks allows for parallel and independent development, which means reuse of skills and people, and freedom to choose the most appropriate technologies without constraints. In addition, separation of the stacks enables distinct optimization of each stack and lays the ground for scalability. The overall design of the software becomes simpler, and refactoring and enhancements come easier.


Flavors of CQRS

Quite simply, CQRS is a concrete implementation pattern, and it is probably the most appropriate for nearly any type of software application today, regardless of the expected lifespan and complexity. A point that often goes unnoticed is that there is not just one way of doing CQRS. I would recognize at least three different flavors of CQRS: Regular, Premium, and Deluxe.



CQRS Regular

CQRS is not just for overly complex applications, even though that is where it shines: concurrent, collaborative, high-traffic, high-scale systems. The principle behind CQRS works well even with plain old CRUD applications.


CQRS for Plain CRUD Applications



So, to make an existing CRUD system CQRS-aware, instead of a monolithic block that takes care of database access for reading and writing, you have two distinct blocks: one for commands and database writes, and one for queries and database reads. If you are using Entity Framework or another object-relational mapper to perform data access, all you do is duplicate the context object through which you go down to the database, so you have one class library and model for commands, and one for queries. Having distinct class libraries is only the first step.



In the command stack, you just use the pattern that represents the best fit.

In the read stack, you likewise use whatever represents the best fit (the O/RM of choice, ADO.NET, ODBC, micro frameworks, even stored procedures; whatever can bring data back the way you want). LINQ can help in the sense that it can easily bring IQueryable objects right into the presentation layer for direct data binding. Know that an IQueryable object describes a database query but won't execute it until you call ToList or another analogous method. This means that you can take IQueryable objects, carry them from the bottom of the system up to the presentation, and resolve the query right into the view model classes expected by the front-end.
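
A small sketch of this deferred execution; Database is the read-only facade shown below, and Customer and CustomerListItem are hypothetical types:

IList<CustomerListItem> GetItalianCustomers(Database db)
{
    // Nothing hits the database here: the query is only being composed
    IQueryable<Customer> query = db.Customers
        .Where(c => c.Country == "IT")
        .OrderBy(c => c.Name);

    // Execution happens only now, when ToList materializes the results
    return query
        .Select(c => new CustomerListItem { Name = c.Name })
        .ToList();
}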

A nice tip in this context is to use, in the read stack of a CQRS solution, a read-only wrapper for the Entity Framework DbContext. In this way, when a query is performed, the presentation and application layers only ever see IQueryable data, and write actions, like those you could perform through SaveChanges, simply cannot be performed.


Demo : Read-only Database Facade

public class Database : IDisposable
{
    private readonly QueryDbContext _db = new QueryDbContext();
    public IQueryable<Customer> Customers { get => _db.Customers; }
    public void Dispose() { _db.Dispose(); }
}


You use this simple Database class instead of the native Entity Framework DbContext, but wrap, as a private member of the new class, the same DbContext object you get from Entity Framework. Next, all you do is return the generic DbSet objects as IQueryable<T>, rather than as plain DbSet<T>. That's it. It is a simple but effective trick.
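
A possible usage sketch (TotalOrders is a hypothetical property):

using (var db = new Database())
{
    // Consumers can only query; there is no SaveChanges to call
    var bigSpenders = db.Customers
        .Where(c => c.TotalOrders > 100)
        .ToList();
}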



CQRS Regular in Action (SKIP)



CQRS Premium


All applications are CRUD applications to some extent, but at the same time, not all CRUD applications are the same. In some cases, it may happen that the natural format of the data processed and generated by commands, the data that captures the current state of the system, is significantly different from the ideal way of presenting the same data to users. This is an old problem of software design, and we developers have solved it for decades by using adapters.


In CQRS, just having two distinct databases sounds like an obvious thing to do, if data manipulation and visualization needs would require different formats. The issue becomes how to keep the two databases in sync.


The issue is that having distinct data stores for commands and queries makes development simpler and optimizes both operations, but at the same time, it raises the problem of keeping the two stores in sync. For the time in which data is not synced up, your app serves stale data, and the question is: is this acceptable?


The dynamics of a CQRS premium solution when two distinct data stores are used



CQRS Premium in Action


The user interface places a request and the application layer sends the request for action down to the command stack. To see the value of a premium approach, imagine that the application is essentially an event-based application, for example, a scoring app in which the user clicks buttons whenever some event is observed and leaves the business logic the burden of figuring out what to do with the notified event. The command stack may do some processing and then saves just what has happened. The data store ends up being a sort of log of the actions triggered by the user. This form of data store also makes it easy to implement undo functionality: you just delete the last recorded action from the log, and that's it. The data store, the log of actions, can be relational, or it can be a NoSQL document database; it's basically up to you and your skills and preferences.

Synchronization, which may happen within or outside the current business transaction, consists of reading the list of recorded actions for a given aggregate entity, in this case the match, and extracting just the information you want to expose to the UI listeners, for example, a live scoring application that just wants to know the score of a match and doesn't care about who scored the goals and when. At this point, the read data store is essentially one or more snapshot databases for clients to consume as quickly and easily as possible.



Message-based Business Logic

When you think about a system with distinct command and query stacks, inevitably the vision of the system becomes a lot more task-oriented. Tasks are essentially workflows and workflows are a concatenated set of commands and events.


A message-based architecture is beneficial as it greatly simplifies the management of complex, intricate, and frequently changing business workflows. However, such a message-based architecture would be nearly impossible to achieve outside the context of CQRS, which keeps the command and query stacks neatly separated.


So, what is the point of messages? Abstractly speaking, a message can either be a command or an event.


public class Message
{
    public DateTime TimeStamp { get; set; }
    public String SagaId { get; protected set; }
}


So in code, you usually start by defining a base Message class that defines a unique ID for the workflow, and possibly a time-stamp to denote the time at which the message was received.


public class Command : Message
{
    public String Name { get; protected set; }
}

public class Event : Message
{
    // Any properties that may help retrieving and persisting events
}


Next, from this base Message class you derive additional classes: one denoting a command, which in the typical implementation is a message with a name, and one denoting an event, which is essentially just the notification of something that has happened. To further derived event classes you can add properties that may help in retrieving information associated with that fact.


An event carries data and notifies of something that has happened. A command is an action performed against the back-end that the user or some other system component requested. Events and commands follow rather standard naming conventions: a command is imperative and has a name like SubmitOrderCommand; an event instead denotes a thing of the past and is named like OrderCreated.
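
Following these conventions, the two message types might look like this; the classes and their properties are illustrative, not from the course:

// Imperative name: an action requested against the back-end
public class SubmitOrderCommand : Command
{
    public ShoppingCart Cart { get; set; }
}

// Past tense: the notification of something that has happened
public class OrderCreatedEvent : Event
{
    public string OrderId { get; set; }
}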


In a message-based architecture, you render any business task as a workflow, except that instead of using an ad-hoc framework to define the workflow or plain code, you determine the progress of the workflow by sending messages. 


The application layer sends a message and the command stack processes the message, in much the same way early versions of Windows processed their messages. When a message, whether a command or an event, is received, the command stack originates a task. The task can be a long-running, stateful process, as well as a single action or a stateless process. A common name for such a task is saga. Commands usually don't return data back to the application layer, except perhaps for some quick form of feedback, such as whether the operation completed successfully, was refused by the system, or the reason why it failed. The application layer can trigger commands following user actions, incoming data from asynchronous streams, or other events generated by previous commands.


For a message-based system to work, some new infrastructure is required, the bus and an associated set of listeners are the main building blocks.


The core element of a message-based architecture is the workflow. The workflow is the direct descendant of user-defined flowcharts. Abstracted to a saga instance, the workflow advances through messages, commands, and events. The central role played by workflows and flowcharts is the secret of such an architecture: it is so simple that it can be easily understood even by domain experts, because it resembles flowcharts, and it can also be understood by developers, because it is task-oriented, and thus so close to the real business and so easy to mirror with software.



CQRS Deluxe

CQRS Deluxe is a flavor of command/query separation that relies on a message-based implementation of the business tasks. The read stack is not really different from the other CQRS scenarios we have considered so far, but the command stack takes a significantly different layout: a new way of doing old things, but a new way that is hopefully a lot more extensible and resilient to change.


In this CQRS design, the application layer doesn't call out to any full-fledged implementation of the workflows; it simply turns any input it receives into a command and pushes that to a new element, the bus.




The bus generically refers to a shared communication channel that facilitates communication between software modules. The bus here is just a shared channel and doesn't necessarily have to be a commercial product or an open source framework. It can also simply be your own class.


At startup, the bus is configured with a collection of listeners, that is, components that just know what to do with incoming messages. There are two types of message handlers, called sagas and handlers. A saga is an instance of a process that is optionally stateful, maintains access to the bus, is persistable, and is sometimes long-running. A handler is instead simpler: a one-off executor of whatever code is bound to a given message.

The flowchart behind the business task is never laid out entirely. It is rather implemented as a sequence of small steps, each calling out the next or raising an event to indicate that it is done. As a result, once a message is pushed to the bus, the resulting sequence of actions is only partially predictable and may be altered at any time by adding and removing listeners to and from the bus. Handlers end immediately, whereas sagas, which are potentially long-running, will end at some point in the future, when the final message is received that ends the task the saga represents. Sagas and handlers interact with whatever family of components exists in the command stack to expose business logic algorithms.



Most likely, even though not necessarily, you'll have a domain layer here, with a domain model and domain services, along the lines of the architecture we discussed in module four. The domain services, specifically repositories, will then interact with the data store to save the state of the system.





The use of the bus also enables another scenario: event sourcing. Event sourcing is a technique that turns detected and recorded business events into a true part of the data source of the application. When the bus receives a command, it just dispatches the message to any registered listeners, whether sagas or handlers. But when the bus receives an event, from the presentation or from other sagas, it may first optionally persist the event to the event store, a log database, and then dispatch it to listeners. It should be noted that what I describe here is the typical behavior one expects from a bus when it comes to orchestrating the steps of a business task.

As mentioned, CQRS Deluxe is particular just because of the innovative architecture of the command stack. The read stack instead just uses any good query code that does the job: your O/RM of choice, possibly LINQ, and ad-hoc storage, mostly relational. The issue of stale data and synchronization is still here, and in the context of a CQRS Deluxe solution, the code that updates, synchronously or asynchronously, the read database can easily take the form of a handler.



CQRS Deluxe implementation


INSIDE THE BUS


The bus, in particular, is a class that internally maintains a list of known saga types, a list of running saga instances, and a list of known handlers. The bus gets messages, and all it does is dispatch messages to sagas and handlers. Each saga and handler, in fact, declares which messages it is interested in, and in this regard, the overall work of the bus is fairly simple.
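
As an illustration only, here is a minimal in-memory sketch of such a bus, built on the Message base class shown earlier; a real bus would add saga instance tracking, persistence, and queuing:

using System;
using System.Collections.Generic;

public class InMemoryBus
{
    // Maps a message type to the callbacks of sagas/handlers interested in it
    private readonly Dictionary<Type, List<Action<Message>>> _listeners =
        new Dictionary<Type, List<Action<Message>>>();

    public void Subscribe<T>(Action<T> handler) where T : Message
    {
        if (!_listeners.TryGetValue(typeof(T), out var list))
            _listeners[typeof(T)] = list = new List<Action<Message>>();
        list.Add(m => handler((T)m));
    }

    public void Send(Message message)
    {
        // Dispatch the message to every registered listener of its type
        if (_listeners.TryGetValue(message.GetType(), out var list))
            foreach (var handle in list) handle(message);
    }
}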




A saga is characterized by two core aspects.

The command or event that starts the process

The list of commands and events that saga can handle


The resulting implementation of a saga class is then not rocket science. 


public class CheckoutSaga : Saga<CheckoutSagaData>,
                            IStartWith<StartCheckoutCommand>,
                            ICanHandle<CancelCheckoutCommand>,
                            ICanHandle<PaymentCompletedEvent>,
                            ICanHandle<PaymentDeniedEvent>,
                            ICanHandle<DeliveryRequestRefusedEvent>,
                            ICanHandle<DeliveryRequestApprovedEvent>
{
    public void Handle(StartCheckoutCommand message) { ... }
    ...
}


The class declares the messages it is interested in through multiple interfaces, and its body is then full of Handle methods, one for each type of supported message.
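
The IStartWith<T> and ICanHandle<T> interfaces are not spelled out in these notes; plausible definitions are simply:

public interface IStartWith<T> where T : Message
{
    void Handle(T message);
}

public interface ICanHandle<T> where T : Message
{
    void Handle(T message);
}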


More about Sagas

Sagas must be identified by a unique ID

Each saga must be uniquely identified by an ID. The ID can be a number of things: it can be a GUID, or, more likely, it is the ID of the aggregate the saga is all about. In general, a saga is a process that involves some collection of entities relevant in the business context. Any combination of values that uniquely identifies (in the context) the main actor in the process is in any case a valid identifier for the saga.

Sagas might be persistent and stateful

A saga might be stateful and need persistence. In this case,

Persistence is taken care of by the bus

The state of the associated aggregate must be persisted

Sagas might be stateless

A saga might in some cases be stateless as well. In the end, a saga is what you need it to be. If you need it to be stateless, then the saga is a mere executor of orders brought by commands, or it just reacts to events.


Extending a Solution

The point behind CQRS Deluxe and sagas, in the end, is that they make it far easier to extend an existing solution when new business needs and new requests come up. For this extra level of flexibility, you pay the cost of having to implement a bus and a more sophisticated infrastructure for the business logic. This is exactly what I have so far called the message-based approach.


So, let's say you've got a new handling scenario for an existing event, or you just got a request for an additional feature. In this case, all you do is write a new saga or a new handler and register it with the bus. That's it. More importantly, you don't need to touch the existing workflows and the existing code, as the pieces of the workflow are, for the most part, independent from one another.


More About the Bus

You can surely write your own bus class. Whether that is a good choice depends on the real traffic hitting the application, the optimizations, the features, and even the skills of the developers involved. For sure, for example, you might at some point need to plug some queuing and/or persistence agents into the bus.


As an alternative to writing your own bus class, you can look into existing products and frameworks.

NServiceBus from Particular Software

Rebus from Rebus-org

MassTransit from Pandora



CQRS Deluxe Code Inspection (SKIP)



Source

All of this content is a summary of the fifth chapter of the course 'Modern Software Architecture: Domain Models, CQRS, and Event Sourcing' by Dino Esposito on Pluralsight (https://app.pluralsight.com/library/courses/modern-software-architecture-domain-models-cqrs-event-sourcing/table-of-contents). The course contains more material and demos than this summary, and the final Summary module is omitted. You can also watch Pluralsight courses free for a month through Microsoft support.

AND

Copyright

All of this content is a summary of the fourth chapter of the course 'Modern Software Architecture: Domain Models, CQRS, and Event Sourcing' by Dino Esposito on Pluralsight (https://app.pluralsight.com/library/courses/modern-software-architecture-domain-models-cqrs-event-sourcing/table-of-contents). I have also received the original author's permission to post this.


Content

Discovering the Domain Architecture through DDD

The DDD Layered Architecture

The "Domain Model" Supporting Architecture

The CQRS Supporting Architecture

Event Sourcing

Designing Software Driven by the Domain



Domain Layer

One of the most common supporting architectures for a bounded context is the layered architecture, in which the domain model pattern is used to organize and express the business logic.


The domain layer is made of two main components, the domain model and domain services


Domain Model (assume that the model is an object model)

In the context of DDD, a domain model is simply a software model that fully represents the business domain. Most of the time a domain model is an object-oriented model, and if so, it is characterized by classes with very specific roles and very specific features: aggregates, entities, value types, and factories.


Abstractly speaking, the domain model is made of classes that represent entities and values. Concretely, when you get to write such a software artifact, you identify one, or more likely several, modules. In DDD, a module is just the same as a .NET namespace that you use to organize classes in a class library project.


Aspects of a Domain Model Module

A domain-driven design module contains value objects, as well as entities. In the end, both are rendered as .NET classes, but entities and value objects represent quite different concepts and lead to different implementation details.


A value object is fully described by its attributes. The attributes of a value object never change once the instance has been created. All objects have attributes, but not all objects are fully identified by the collection of their attributes. When attributes are not enough to guarantee uniqueness, and when uniqueness is important to the specific object, then you just have domain-driven design entities.


In DDD, as you go through the requirements and work out the domain model of a given bounded context, it is not unusual to spot a few individual entities being constantly used and referenced together. In a domain model, the aggregation of multiple entities under a single container is called an aggregate, and therefore some entities and value objects may be grouped together to form aggregates.


DDD Value Types

To indicate a value object in .NET, you use a value type. The most relevant aspect of a value type is that it is a collection of individual values. The type therefore is fully identified by the collection of values (attributes), and the instance of the type is immutable. In other words, the attributes of a value object never change once the instance has been created, and if they do change, the value object becomes another instance of the same type, fully identified by the new collection of attributes. Value types are used instead of primitive types, such as integers and strings, because they more precisely and accurately render the values and quantities found in the business domain.


e.g. suppose you have to model an entity that contains weather information. How would you render the member that indicates the outside temperature? It could be a plain integer property. However, in that case the property can be assigned any number in the full range of values that .NET allows for integers, which enables the assignment of values that are patently outside the reasonable range for a weather temperature. The integer type, in fact, is a primitive type and is too generic to closely render a specific business concept such as an outside temperature. A better option is a new custom value type. In the new type, you can have things like constructors, min/max constants, setters and getters, additional properties, for example, whether it's a Celsius or a Fahrenheit temperature, and you can even overload operators.
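
Here is a minimal sketch of such a value type; the range limits and the scale handling are made-up simplifications:

public struct Temperature
{
    public const int Min = -60;
    public const int Max = 60;

    public Temperature(int degrees, char scale = 'C')
    {
        if (degrees < Min || degrees > Max)
            throw new ArgumentOutOfRangeException(nameof(degrees));
        Degrees = degrees;
        Scale = scale;   // 'C' or 'F'
    }

    public int Degrees { get; }
    public char Scale { get; }

    // Immutability: "changing" the value returns a new instance
    public Temperature WithDegrees(int degrees) => new Temperature(degrees, Scale);

    // Overloaded operators; comparison assumes the same scale for brevity
    public static bool operator >(Temperature a, Temperature b) => a.Degrees > b.Degrees;
    public static bool operator <(Temperature a, Temperature b) => a.Degrees < b.Degrees;
}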


DDD Entities

Not all objects are fully identified by their collection of attributes. Sometimes you need an object to have an identity attribute, and when uniqueness is important to the specific object, then you have entities. Put another way, if the object needs an ID attribute to track it uniquely throughout the context for the entire application life cycle, then the object has an identity and is said to be an entity.


Typically made of data and behavior

Contain domain logic, but not persistence logic


Concretely, an entity is a class with properties and methods, and when it comes to behavior, it's important to distinguish domain logic from persistence logic. Domain logic goes in the domain layer, model or services. Persistence goes in the infrastructure layer managed by domain services.

e.g. an order entity is essentially a document that lays out what the customer wants to buy. The logic associated with the order entity itself has to do with the content of the document: things like taxes and discount calculation, order details, or perhaps the estimated date of payment. The entire process of fulfillment, tracking status, or just invoicing for the order is well outside the domain logic of the order class. These are essentially use cases associated with the entity, and services are responsible for their implementation.


DDD Aggregates

An aggregate is a collection of logically related entities and value types (a few individual entities constantly used and referenced together). More than a flat collection, an aggregate is a cluster of objects, including their relationships, that you find easier to treat as a single entity for the purpose of queries and commands (a cluster of associated objects treated as one for data changes). In the cluster, there's a root object that is the public endpoint the outside world uses to query or command. Access to the other members of the aggregate is always mediated by the aggregate root. Two aggregates are separated by a sort of consistency boundary that suggests how to group entities together (preserve transactional integrity). The design of aggregates is closely inspired by the business transactions required by the system. Consistency here means transactional consistency: objects in the aggregate expose an overall behavior that is fully consistent with the business processes.


In the example discussed in the course, four aggregates are identified, and direct communication is allowed only between roots. A notable exception is the connection between order detail and product. Product, an aggregate root, is not allowed to access order detail, a non-root element, directly; communication is allowed, but only through the Order root. However, it is acceptable that order detail, a child object, references an outside aggregate root, if, of course, the business logic demands it.




Domain Services

Domain services are instead responsible for things like cross-object communication (cross-aggregate behavior), data access via repositories, and the use of external services, if required by the business.


Domain services are made of special components, such as repositories, which provide access to storage, and proxies towards external web services.



Misconceptions about DDD

1. Perceived simply as having an object model with some special characteristics

It may seem that DDD is nothing more than having an object model with the aforementioned special features: the object model is expected to be agnostic of data storage, and, oversimplifying quite a bit, the database is merely part of the infrastructure, doesn't interfere with the design of the model, and can be neglected.


=> Context mapping is paramount

=> Modeling the domain through objects is just one of the possible options


In DDD, identification and mapping of bounded contexts is really paramount. Modeling the domain through objects is just one of the possible options. So, for example, using a functional approach is not prohibited, and you can do good domain-driven design even if you use a functional approach to coding or even an anemic object model with stored procedures. 


=> The object model must be easy to persist

=> Persistence, though, should not be the primary concern

=> Primary concern is making sense of the business domain


For the sake of design, if you aim at building an object model, then persistence should not be your primary concern. Your primary concern is making sense of the business domain. However, the object model you design must still be persisted at some point, and when it comes to persistence, the database and the API you use to go down to the database are a constraint and cannot always be blissfully ignored.


2. Ubiquitous Language is a guide to naming classes in the object model


=> Understand the language to understand the business

=> Keep language of business in sync with code


The ubiquitous language is the tool you use to understand the language of the business, rather than a set of guidelines to name classes and methods. In the end, the domain layer is the API of the business domain, and you should make sure that no wrong call to the API is possible that could break the integrity of the domain. It should be clear that in order to stay continuously consistent with the business, you should focus your design on behavior much more than on data. To make domain-driven design a success, you must understand how the business domain works and render it with software. That's why it's all about behavior.



Persistence vs. Domain Model


Persistence Model : the model to persist data

Object-oriented model 1:1 with underlying relational data

Reliable and familiar to most developers

Doesn't include business logic (except perhaps validation)


You can express a data model by looking at a bunch of SQL tables or by using objects you manage through the object-relational mapper of choice, for example, Entity Framework. This is essentially a persistence model. It's object-oriented, and it's nearly 1:1 with the data store of choice. Objects in the model may include some small pieces of business logic, mostly for validation purposes, but the focus of the design remains storage.


Domain Model

Object-oriented model for business logic

Persistable model

No persistence logic inside


It's still an object-oriented model, it's still a model that must be persisted in some way, but it is a model that doesn't contain any clue or a very limited awareness of the storage. The focus of the design here is behavior and business processes as understood from requirements.


In the end, it's a matter of choice. On one hand, you have a scenario in which objects are plain containers of data and business logic is hard-coded in distinct components. On the other hand, you have business logic expressed through processes and actual objects are functional to the implementation of those processes, and because of this, data and logic are often combined together in a single class.




What is behavior?

According to the dictionary it is the way in which one acts or conducts oneself, especially towards others. To move towards a behavior-centric design of the domain model, we need to set in stone what we intend by behavior.


Methods that validate the state of the object

Methods that invoke business actions to perform on the object

Methods that express business processes involving the object


In a model that exposes classes designed around database tables, you always need some additional components to perform access to those classes in a way that is consistent with business rules; yet your domain model class library has public classes with full get and set access, direct access, to the properties. In this case, there is no guarantee that consumers of the domain model will not make direct, unfiltered access to the properties, with the concrete risk of breaking the integrity of data and model.

A better scenario is when you arrange things so that the only way to access the data is mediated by an API, and the API is exposed by the objects themselves. The API is part of the model; it's not an external layer of code you may or may not invoke. The containers of business logic are not external components; the business logic, and that's the point, is built right into the objects. So you don't set properties: you invoke methods, and the state of objects is altered as a result of the actions you invoked. This is a more effective way to deal with and model the real world.
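
A minimal sketch of this idea, modeled on the sport match example recalled earlier; the state names and rules are illustrative:

public enum MatchState { Scheduled, InProgress, Finished }

public class Match
{
    public Match(string id) { Id = id; State = MatchState.Scheduled; }

    public string Id { get; }
    public MatchState State { get; private set; }
    public int Goals1 { get; private set; }
    public int Goals2 { get; private set; }

    // No public setters: state changes only through methods that enforce the rules
    public void Start()
    {
        if (State != MatchState.Scheduled) throw new InvalidOperationException();
        State = MatchState.InProgress;
    }

    public void Goal(int teamId)
    {
        if (State != MatchState.InProgress) throw new InvalidOperationException();
        if (teamId == 1) Goals1++; else Goals2++;
    }

    public void Finish()
    {
        if (State != MatchState.InProgress) throw new InvalidOperationException();
        State = MatchState.Finished;
    }
}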


Aggregates and Value Types

In the beginning, when you start going through the list of requirements, whether in a classic analysis session or in an event storming session, you just find entities and value types along the way. As you go through, you may realize that some entities often go together, and together they fall under the control of one particular entity.

When you get this, you have probably crossed the boundary that defines an aggregate. Among other things, aggregates are useful because once you use them you work with fewer, coarser-grained objects and with fewer relationships, just because several objects may be encapsulated in a larger container. And well-identified aggregates greatly simplify the implementation of the domain model.


Facts of an Aggregate

Protect as much as possible the graph of encapsulated entities from outside access

Ensure the state of child entities (contained objects) is always valid according to the applicable business rules (consistency)

Actual boundaries of aggregates are determined by business rules


How would you identify which objects form an aggregate? Unfortunately, there are no mathematical rules for this. The actual boundaries of aggregates are determined only and exclusively by analysis of the business rules. It's primarily about how you intend to design the business domain and how you envision that business domain.

e.g. a customer would likely have an address, and customer and address are two entity classes for sure. Do they perhaps form an aggregate? It depends. If the address, as an entity, exists only to be an attribute of the customer, then they form an aggregate. Otherwise, they are distinct aggregates that work together.


Common Responsibilities (Associated with an Aggregate Root)

An aggregate root object is the root of the cluster of associated objects that form the aggregate. An aggregate root has global visibility throughout the domain model and can be referenced directly. It's the only part of the aggregate that can be referenced directly, and it has a few responsibilities too.


Ensure encapsulated objects are always in a consistent state

It has to ensure that encapsulated objects are always in a consistent state business-wise.

Take care of persistence for all encapsulated objects

Cascade updates and deletions through the encapsulated objects

Access to encapsulated objects must always happen by navigation

The root has to guarantee that access to encapsulated objects is always mediated and happens only through navigation from the root.


+ One repository per aggregate

Most of the code changes required to implement the responsibilities of an aggregate root occur at the level of services and repositories, so well outside the realm of the domain model. Each aggregate root has its own dedicated repository service that implements consistent persistence for all of its objects.


But in terms of code, what does it mean to be an aggregate root?

An aggregate is essentially a logical concept. When you create a class that behaves as an aggregate, you just create a plain class. I would even say that you just have your entity classes, and then you upgrade one of those to the rank of an aggregate; that mostly depends on how you use them. Being an aggregate is an aspect of the overall behavior and usage of the class rather than a feature you code directly.

However, especially when you have a very large domain model, it can be a good idea to use something that identifies at the code level that some classes are aggregates. This can be very simple: in some cases, just a marker interface you can call IAggregate. The interface can be a plain marker interface, an interface with no members, or you can add some common members you expect to be defined on all aggregates.
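
For example, a sketch of such a marker interface:

public interface IAggregate
{
    // Empty marker, or add members you expect on every aggregate, e.g.:
    // string Id { get; }
}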


+ Constructor vs. Factory

It's interesting to notice the way in which classes in a domain model, aggregates, are initialized. Typically, in an object-oriented language you use constructors. Constructors work; there's nothing really bad with constructors in terms of functional behavior, but factories are probably better because of the expressivity they allow you to achieve. In particular, I suggest you have a static class within an aggregate, say a class called Factory, with a bunch of static public methods: CreateNew, CreateNewWithAddress, and as many methods as there are ways to create new instances of a given aggregate. The big difference between constructors and factories is essentially that on a factory class you can give a name to the method that returns a fresh new instance of the aggregate, and this makes the code far easier to read. When you read the code back, from the name of the factory method you can figure out the real reason why you are getting a new instance of that class at that point.
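
A minimal sketch of the nested factory idea; Customer and Address are hypothetical types, and Guid comes from System:

public class Customer : IAggregate
{
    public string Id { get; private set; }
    public string Name { get; private set; }
    public Address Address { get; private set; }

    // The method names document why an instance is being created
    public static class Factory
    {
        public static Customer CreateNew(string name)
        {
            return new Customer { Id = Guid.NewGuid().ToString(), Name = name };
        }

        public static Customer CreateNewWithAddress(string name, Address address)
        {
            var customer = CreateNew(name);
            customer.Address = address;
            return customer;
        }
    }
}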



Domain Services

Classes in the domain model are expected to be agnostic of persistence, yet the model must be persisted. Domain services are the companion part of the domain layer architecture that coordinates persistence and the other dependencies required to successfully implement the business logic.


Domain services are classes whose methods implement the domain logic that doesn't belong to a particular aggregate and most likely spans multiple aggregates. Domain services coordinate the activity of the various aggregates and repositories with the purpose of implementing all business actions, and they may consume services from the infrastructure, such as when they need to send an email or a text message.


Domain services are not arbitrary pieces of code you create because you think you need them. Domain services are not helper classes. All actions implemented by domain services come straight from requirements and are approved by domain experts. But there's more: even the names used in domain services are strictly part of the ubiquitous language.


Let's see a couple of examples. Suppose you need to determine whether a given customer has, at some point, reached the status of a gold customer. What do the requirements say about gold customers? Let's suppose that a customer earns the status of gold after she exceeds a given threshold of orders on a selected range of products. Nice. This leads to a number of distinct actions and aspects. First, you need to query orders and products to collect the data necessary to evaluate the status. As data access is involved at this stage, this is not the right job for an aggregate. The final piece of information, whether or not the customer is gold, is then set as a Boolean value, typically in a new instance of a customer class. The overall logic necessary to implement this feature, as you can see, spans multiple aggregates and is strictly business-oriented.

Here is another example: booking a meeting room. What do the requirements say about booking a room? Booking requires, for example, verifying the availability of the room and processing the payment. We have two options here, different but equally valid from a strictly functional perspective. One option is using a booking domain service. The service will have something like an Add method that reads the member's credit status, checks the available rooms, and then, based on that, decides what to do. The other option entails having a booking aggregate. In this case, the aggregate may encapsulate entities like room and member objects, which, in other bounded contexts in the same application, for example the admin context, may be aggregate roots themselves. The actual job of saving the booking in a consistent manner is done in this case by the repository of the aggregate.


What's a repository?

In domain-driven design, a repository is just the class that handles persistence on behalf of entities, and ideally aggregate roots. Repositories are the most popular and most used type of domain service. A repository takes care of persisting aggregates, and you have one repository per aggregate. Any assembly with repositories has a direct dependency on data stores. Consequently, a repository is just the place where you actually deal with things like connection strings and where you use SQL commands.


You can implement repositories in a variety of ways, and you can find a million different examples out there, each claiming to be the best and most accurate way of writing repositories. I'd like to take a low-profile position here: there's nearly no wrong way to write a repository class.


public interface IRepository<T> where T : IAggregateRoot
{
    // You can keep the interface a plain marker or
    // you can have a few common methods here.
    T Find(object id);
    bool Save(T item);
    bool Delete(T item);
}


You typically start from an IRepository generic interface and decide whether you'd like to have a few common methods there. But this, like many other implementations, is just arbitrary; it's a matter of preference and choice. In summary, in domain-driven design a repository is just the class that handles persistence on behalf of aggregate roots.
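
For instance, a repository built on Entity Framework might look like the following sketch; Order, IAggregateRoot, and CommandDbContext are assumed types, and Save is simplified to insert-only:

public class OrderRepository : IRepository<Order>
{
    public Order Find(object id)
    {
        using (var db = new CommandDbContext())
            return db.Orders.Find(id);
    }

    public bool Save(Order item)
    {
        using (var db = new CommandDbContext())
        {
            db.Orders.Add(item);   // simplified: inserts only, no update logic
            return db.SaveChanges() > 0;
        }
    }

    public bool Delete(Order item)
    {
        using (var db = new CommandDbContext())
        {
            db.Orders.Attach(item);
            db.Orders.Remove(item);
            return db.SaveChanges() > 0;
        }
    }
}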



Events in the Business Domain (Why Should You Consider Events in a Domain Layer?)

Events in the context of the domain layer are becoming increasingly popular, so the question becomes: should you consider events? First and foremost, events are optional, but they are sometimes a more effective and resilient way to express the intricacy of some real-world business domains.


Imagine the following scenario. In an online store application, an order is placed and is processed successfully by the system, which means that the payment is okay, the delivery order was passed to and received by the shipping company, and the order was then generated and inserted into the system. Now what? Let's suppose that the business requirements want you to perform some special tasks upon the creation of the order. The question now becomes: where would you implement such tasks?


The first option is just concatenating the code that implements additional tasks to the domain service method that performed the order processing. You essentially proceed through the necessary steps to accomplish the checkout process, and if you succeed you then, at that point, execute any additional tasks. It all happens synchronously and is coded in just one place.


void Checkout(ShoppingCart cart)
{
    // Proceed through the necessary steps
    ...
    if (success)
    {
        // Execute task(s)
    }
}


What can we say about this code? It's not really expressive. It's essentially monolithic code, and should future changes be required, you would need to touch the code of the service to implement them, with the risk of making the domain service method quite long and even convoluted. But there is more: this might even be classified as a subtle violation of the ubiquitous language. The adverb "when" that you may find in the language typically refers to an event, and to actions to take when a given business event is observed.


What about events then? Events remove the need to have all of the handling code in a single place and also bring a couple of other non-trivial benefits to the table. First, the action of raising the event is distinct from the action of handling the event, which may be good for testability. And second, you can easily have multiple handlers dealing with the same event independently.


public class GoldMemberStatusReached : IDomainEvent
{
    public GoldMemberStatusReached(Customer customer)
    {
        Customer = customer;
    }

    public Customer Customer { get; set; }
}


The event can be designed as a plain class with just a marker interface to characterize it, which could be, as in the demo, IDomainEvent. This is analogous to event classes as we find them in the .NET Framework. To be honest, you could even use EventArgs, the .NET Framework root class for events, as the base class if you wish. Not using the .NET native classes is mostly done because of the ubiquitous language, to stay as close as possible to the language of the business.


void Checkout(ShoppingCart cart)
{
    // Proceed through the necessary steps
    ...
    if (success)
    {
        // Execute task(s)
        Bus.RaiseEvent(new GoldMemberStatusReached(customer));
    }
}


Then, once you have this event class defined, in the checkout domain service method, after you've gone through all of the business steps, you just raise the event with all the necessary information you want to pass along. Yes, but how would you dispatch events?


There's a bus. The bus is an external component, typically part of the infrastructure, and it is generally acceptable that a domain model class library has a dependency on the infrastructure. The bus allows listeners to register and notifies them transparently to the code. The bus can be a custom class you create, or it can be a professional, commercial product like NServiceBus or maybe Rebus. There is more flexibility in your code when you use events, as you can even record events, log what happened, and add and remove handlers for those events quite easily.


Events are gaining more and more importance in software design and architecture these days, well beyond the domain model, as a way to express the business logic. Events are part of the real world and have to do with business processes much more than with business entities. Events help significantly to coordinate actions within a workflow and to make use-case workflows a lot more resilient to changes. This is the key fact that today is leading towards a different supporting architecture: event sourcing, an alternative to the domain model supporting architecture.



Anemic Models

Anti-pattern because "it takes behavior away from domain objects"


The domain model pattern is often contrasted with another pattern known as the anemic domain model. For some reason, the anemic domain model is also considered a synonym of an anti-pattern. Is it true? Given that software architecture is the triumph of shades of gray, and nothing is either black or white, I personally doubt that today the anemic domain model is really an anti-pattern. In an anemic domain model, all objects may still match the conventions of the ubiquitous language and some of the domain-driven design guidelines for object modeling, like value types over primitive types and relationships between objects.


The inspiring principle of the anemic domain model is that you have no behavior in entities, just properties, and all of the required logic is placed in a set of service components that altogether contain the domain logic. These services orchestrate the application logic, the use cases, and consume the domain model and access the storage.


Again, is this really an anti-pattern today, with the programming tools of today? Let's consider Entity Framework and Code First. When you use Code First, you start by defining an object model. Is the model you create representative of the domain? I'm not sure. I'd say that it is rather a mix of domain logic and persistence. The model you create with Code First must be used by Entity Framework, so Entity Framework must like it. To me, this sounds more like a persistence model than a domain model. Sure, you can add behavior (read: methods) to the classes in the Code First model, but there is no obvious guarantee that you will be able to stay coherent with the requirements of the ubiquitous language and still keep Entity Framework happy. And in case of conflicts, because you still need persistence, Entity Framework will win, and compromises will be in order.

The database, at the end of the day, is part of the infrastructure; it doesn't strictly belong to any model you create, but because of the persistence requirement it is still a significant constraint you cannot ignore. An effective persistence model is always necessary, because of the functions to implement and because of performance. Sometimes, when compromises are too expensive, you might want to have a second, distinct model in addition to persistence: you distinguish between the domain model and the persistence model and use adapters to switch between the two. Alternatively, you can completely ignore the domain model, keep the persistence model you create with Code First DB-friendly, and use patterns other than the domain model pattern to organize the business logic of the application.

In the end, implementing the domain model pattern is not at all a prerequisite for doing good domain-driven design. And the model you end up with when using Code First and Entity Framework is hardly a domain model; it is a lot more anemic than you may think at first.



Beyond Single All-encompassing Domain Models

This is close to being a Murphy law: if a developer can use an API the wrong way, he will. How do you avoid that? With proper design, I would say.

All of the facts explained in Eric Evans' book about domain-driven design, written about a decade ago, carry the word object and implicitly refer to an object-oriented world, the domain model. So far in this course I have tried to push hard the point that the foundation of domain-driven design is agnostic of any development paradigm, in much the same way it is agnostic of the persistence model. I believe that the real direction we're moving in today is pushing the idea of a single, all-encompassing domain model for the entire business domain to the corner. I rather see that it is more and more about correctly identifying and rendering business processes, with all of their related data and events.

You can certainly do that using classes that, like aggregates, retain most of the business logic and events, so you can certainly model business processes using the object-oriented paradigm. But, and that's really interesting, you can also do that today using functional languages. If you search around, you will find a lot of references already about using F# to implement business logic. Many of the references are still vague, but some points emerge clearly. With a functional approach, for example, we don't need to go about coding restrictions ourselves in a domain model, such as using value types; that's the norm in a functional language. And composition of tasks via functions often results naturally in code so simple that it can even be read and understood by domain experts.

The foundation of domain-driven design, at the very end of the day, is the ubiquitous language as a tool to discover the business needs and come up with a design driven by the domain, in which the presentation issues commands, commands are executed, and somewhere, somehow, data is saved. For this way of working, more effective practices and architectures are emerging. One is CQRS.



Source

All of this content is my summary of the fourth chapter of 'Modern Software Architecture: Domain Models, CQRS, and Event Sourcing', a Pluralsight course by Dino Esposito (https://app.pluralsight.com/library/courses/modern-software-architecture-domain-models-cqrs-event-sourcing/table-of-contents). The course contains more material and demos than my notes, and I am leaving out the final Summary. You can also watch Pluralsight courses free for a month through Microsoft support.

AND

Copyright

All of this content is my summary of the third chapter of 'Modern Software Architecture: Domain Models, CQRS, and Event Sourcing', a Pluralsight course by Dino Esposito (https://app.pluralsight.com/library/courses/modern-software-architecture-domain-models-cqrs-event-sourcing/table-of-contents). I have also received the original author's permission to post these notes.


Content

Discovering the Domain Architecture through DDD

The DDD Layered Architecture

The "Domain Model" Supporting Architecture

The CQRS Supporting Architecture

Event Sourcing

Designing Software Driven by the Domain


Outline

The Layers of a Software System

The Presentation Layer

The Application Layer

The Business Logic

Patterns for Organizing the Business Logic

The Domain Layer

The Infrastructure Layer



Segments of Code

Layer : a logical container for a portion of code

Tier : physical container for code



Modern application architecture

First. Build a model for the domain

Leveraging the strategic patterns of DDD, such as ubiquitous language and bounded context

Then. Layered Architecture with standard segments

Presentation(User experience), Application(Use-cases), Domain(Business logic), Infrastructure(Persistence)




The Presentation Layer

The presentation layer is responsible for providing the user interface through which users accomplish the required tasks. Whatever command is issued from the presentation layer hits the application layer and from there is routed through the various remaining layers of the system.

Presentation can be seen as a collection of screens: each screen is populated by a set of data, and any action that starts from a screen forwards another well-defined set of data back to the originating screen or even to some other screen.

Generally speaking, we'll refer to any data that populates the presentation layer as the view model, and to any data that goes out of the screen and triggers an action as the input model. Even though a logical difference exists between the two models, most of the time the view model and the input model just coincide.


The presentation layer is the most critical part of modern applications

Responsible for providing the user interface to accomplish any required tasks

Responsible for providing an effective, smooth and pleasant user experience


Good attributes of a presentation layer are...

Task-based nature

Device-sensitive and friendly

User-friendly

Faithful to real-world processes



The Application Layer

The application layer is an excellent way to separate interfacing layers, such as the presentation layer and the domain layer. In doing so, the application layer contributes enormous clarity to the entire design of the application.


The application layer is just where you orchestrate the implementation of the application's use cases


The Application Layer

Reports to the presentation

Serves ready-to-use data in the required format

Orchestrates tasks triggered by presentation elements

Use-cases of the application's frontend

Doubly-linked with presentation

Possibly extended or duplicated when a new frontend is added


The link between the presentation and application layers should be established right when you design the user interface, because each form you display has an underlying data model that becomes the input of the methods invoked in the application layer. In addition, the result of each application layer method is just the content that you use to fill up the next screen displayed to the user. The application layer is absolutely necessary, but it strictly depends on what you actually display to users, and what you display to users has to guarantee a great user experience. This is why a top-down approach to software development is absolutely crucial today.
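
A minimal sketch of such an application-layer method, input model in and view model out; every name here (CheckoutService, IOrderRepository, IPaymentGateway, and so on) is hypothetical:

public record CheckoutInputModel(int OrderId, string PaymentToken);
public record CheckoutViewModel(int OrderId, string Confirmation);

public class Order
{
    public int Id { get; private set; }
    public decimal Total { get; private set; }
    public string PaymentId { get; private set; }
    public void MarkPaid(string paymentId) => PaymentId = paymentId;
}

public interface IOrderRepository { Order FindById(int id); void Save(Order order); }
public interface IPaymentGateway { string Charge(decimal amount, string token); }

public class CheckoutService
{
    private readonly IOrderRepository _orders;
    private readonly IPaymentGateway _payments;

    public CheckoutService(IOrderRepository orders, IPaymentGateway payments)
    {
        _orders = orders;
        _payments = payments;
    }

    // Orchestrates one use case: the screen posts an input model,
    // the method returns the view model for the next screen.
    public CheckoutViewModel Checkout(CheckoutInputModel input)
    {
        Order order = _orders.FindById(input.OrderId);
        string confirmation = _payments.Charge(order.Total, input.PaymentToken);
        order.MarkPaid(confirmation);
        _orders.Save(order);
        return new CheckoutViewModel(order.Id, confirmation);
    }
}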



The Business Logic

In the design of a software system, initially you go through three key phases

1. Getting to know as much as possible about the business domain, splitting the domain into simpler subdomains

2. Learn the language of the business domain

3. Split the business domain into bounded contexts


Then... What's Next?

4. It's all about implementing all business rules and organizing the business logic in software components


Business Logic (An Abstract Definition)

The business logic of a software system is made of two main parts

1. Application logic

Dependent on use-cases : Application entities, Application workflow components

In DDD

Data transfer objects : containers of data being moved around to and from presentation screens

Application services : the components that coordinate tasks and workflows

2. Domain logic

Invariant to use-cases : Business entities, Business workflow components

In DDD

Domain model : entities that hold data and behavior

Domain services : for any remaining behavior and data that for some reason don't fit in the entities


In general terms, both application and domain logic are made of entities to hold data and the workflows to orchestrate behavior


What is Domain logic?

Domain logic is all about how you bake business rules into the actual code

A business rule is any statement that explains in detail the implementation of a process or describes a business-related policy to be taken into account.



Three Common Patterns for organizing the business logic

1. Transaction script

The transaction script pattern is probably the simplest possible pattern for business logic, and it is entirely procedural.


a. System actions - Each procedure handles a single task

The word script indicates that you logically want to associate a sequence of system-carried actions, namely a script, with each user action.

b. Logical transaction - end-to-end from presentation to data

The word transaction in this context has very little to do with database transactions, and it generically indicates a business transaction you carry out from start to finish within the boundaries of the same software procedure.

c. Common subtasks - split into bounded sub-procedures for reuse

As is, the transaction script pattern has some potential for code duplication. However, this aspect can easily be mitigated by identifying common subtasks and implementing them through reusable routines.


In terms of architectural design, the transaction script pattern leads to a design in which actionable UI elements in the presentation layer invoke application layer endpoints, and these endpoints run a transaction script for each task.
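
A minimal sketch of a transaction script, assuming a hypothetical order-placement task; the class, methods, and the discount rule are illustrative only:

using System;

public class PlaceOrderScript
{
    // One procedure carries the whole business transaction end to end:
    // validate input, apply the business rules, persist the result.
    public void Execute(int customerId, int productId, int quantity)
    {
        if (quantity <= 0) throw new ArgumentException("Quantity must be positive.");

        decimal unitPrice = LoadUnitPrice(productId);      // common subtask, reusable
        decimal total = unitPrice * quantity;
        if (total > 10_000m) total *= 0.95m;               // illustrative rule: bulk discount

        SaveOrder(customerId, productId, quantity, total); // common subtask, reusable
    }

    private decimal LoadUnitPrice(int productId) { /* query the catalog table */ return 100m; }
    private void SaveOrder(int customerId, int productId, int quantity, decimal total) { /* INSERT */ }
}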


2. Table module

As the name suggests, the table module pattern heralds a more database-centric way of organizing the business logic. The core idea here is that the logic of the system is closely related to persistence and databases. 


a. One module per table in the database

So, the table module pattern suggests you have one business component for each primary database table.

b. Module contains all methods that will process the data - Both queries and commands

The component exposes endpoints through which the application layer can execute commands and queries against a table, say Orders, and its directly related tables, say OrderDetails.

c. May limit modules to "significant" tables - Tables with only outbound foreign-key relationships

In terms of architectural design, the table module pattern leads to a design in which presentation calls into the application layer, and then, for each step of the workflow, the application layer identifies the table involved, finds the appropriate module component, and works with that.
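
A minimal sketch of a table module, assuming a hypothetical Orders table; OrdersModule and its methods are made-up names:

// One module per significant database table: all commands and queries
// that touch the Orders table (and its detail rows) live here.
public class OrdersModule
{
    public void AddOrder(int customerId, decimal total) { /* INSERT into Orders */ }
    public void AddOrderDetail(int orderId, int productId, int quantity) { /* INSERT into OrderDetails */ }
    public decimal GetOrderTotal(int orderId) { /* SELECT against Orders */ return 0m; }
}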


3. Domain model

The term domain model is often used in DDD. However, in DDD, domain model is quite a generic term that refers to having a software model for the domain. More or less in the same years in which Eric Evans was using the term domain model in the context of the new DDD approach, Martin Fowler was using the same term, domain model, to indicate a specific pattern for the business logic. Fact is, the domain model pattern is often used in DDD, though it's not strictly part of the DDD theory.


a. Aggregated objects - Data and behavior

The domain model pattern suggests that architects focus on the expected behavior of the system and on the data flows that make it work. When implementing the pattern, at the end of the day you build an object model, but the domain model pattern doesn't simply tell you to code a bunch of C# or Java classes. The whole point of the domain model pattern is building an object-oriented model that fully represents the behavior and the processes of the business domain. When implementing the pattern, you have classes that represent live entities in the domain. These classes expose properties and methods, and the methods refer to the actual behavior and the business rules for the entity. Aggregate is a term used in domain-driven design to refer to the core objects of a domain model, and we'll see more about aggregates in the next module.

b. Persistence agnostic

c. Paired with domain services

The classes in the domain model should be agnostic to persistence and paired with service classes that contain just the logic to materialize instances of classes to and from the persistence layer. A graphical schema of a domain model has two elements, a model of aggregated objects and services to carry out specific workflows that span across multiple aggregates or deal directly with persistence.
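
A minimal sketch of that pairing, with a hypothetical Invoice aggregate and a companion domain service; all names are illustrative:

using System;

// Aggregate: data plus behavior, agnostic to persistence.
public class Invoice
{
    public Invoice(int id, decimal amount) { Id = id; Amount = amount; }

    public int Id { get; private set; }
    public decimal Amount { get; private set; }
    public bool IsPaid { get; private set; }

    public void Pay()
    {
        if (IsPaid) throw new InvalidOperationException("Invoice already paid.");
        IsPaid = true;
    }
}

public interface IInvoiceRepository { Invoice FindById(int id); void Save(Invoice invoice); }

// Domain service: materializes aggregates to and from persistence
// and hosts workflows that don't belong to a single aggregate.
public class InvoiceService
{
    private readonly IInvoiceRepository _repository;
    public InvoiceService(IInvoiceRepository repository) => _repository = repository;

    public void PayInvoice(int invoiceId)
    {
        Invoice invoice = _repository.FindById(invoiceId); // materialize
        invoice.Pay();                                     // behavior stays on the aggregate
        _repository.Save(invoice);                         // persist
    }
}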


Where do you start designing the business logic of a real-world system? The basic decision is just one:

Would you go with an object-oriented design, a functional design, or just a procedural approach?



The Domain Layer

In the domain-driven design of a software application, the business logic falls in the segment of the architecture called domain layer.


Domain Layer - Logic invariant to use-cases

Domain model

business domain

not necessarily an implementation of the aforementioned domain model pattern

Domain services

related and complementary set of domain-specific services

the primary responsibility of domain services is persistence



Models for the business domain

In the implementation of a domain layer, as far as the model is concerned, we have essentially two possible flavors.


Domain Model

1. Object-oriented entity model (entity model for short)

An entity model has two main characteristics.

a. DDD conventions (factories, value types, private setters)

Classes follow strict DDD conventions, which means that for the most part these classes are expected not to have public constructors but factories, to use value types over primitive types, and to favor private setters on properties.

b. Data and behavior

These classes are expected to expose both data and behavior.
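
A minimal sketch of those conventions, with a hypothetical Customer entity and EmailAddress value type:

// Factory instead of a public constructor; private setters;
// a value type (EmailAddress) instead of a raw string.
public class Customer
{
    public static Customer Create(string name, EmailAddress email) =>
        new Customer { Name = name, Email = email };

    private Customer() { }

    public string Name { get; private set; }
    public EmailAddress Email { get; private set; }

    // Behavior lives next to the data.
    public void ChangeEmail(EmailAddress newEmail) => Email = newEmail;
}

public record EmailAddress(string Value);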


Anemic model

Often considered an anti-pattern, the anemic domain model is yet another possibility for coding business logic.


Plain data containers

Behavior and rules moved to domain services


An object-oriented domain model is commonly defined as anemic if it's made only of data-container classes, with data and no behavior. In other words, an anemic class just contains data, and the implementation of workflows and business rules is moved to external components, such as domain services.
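
For contrast, a minimal sketch of the anemic style; the names are illustrative, with Order reduced to a plain data container:

using System;

// Anemic: the entity is a plain data container...
public class Order
{
    public int Id { get; set; }
    public string State { get; set; }
}

// ...and behavior and rules move to an external service.
public class OrderService
{
    public void Ship(Order order)
    {
        if (order.State != "Paid") throw new InvalidOperationException("Only paid orders can ship.");
        order.State = "Shipped";
    }
}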


2. Functional model

tasks are expressed as functions.



Domain Services

Domain services complement the domain model and contain those pieces of domain logic that just don't fit into any of the created entities. This covers essentially two scenarios: one is classes that group logically related behaviors spanning multiple entities; the other is the implementation of processes that require access to the persistence layer for reading and writing, or access to external services, including legacy code.



The Infrastructure Layer

Set of the fundamental facilities needed for the operation of a software system


Fundamental Facilities of Software Systems

Database (more generally, persistence; it is more crucial than the facilities below)

Security, Logging & Tracing, Inversion of Control, Caching, Networks...


When it comes to the infrastructure layer, I like to call it the place where the technologies belong: necessary to fuel the entire system, but not binding the system to any specific products. The infrastructure layer is where you start dealing with configuration details, things like connection strings, file paths, TCP addresses, or URLs. To keep the application decoupled from specific products, you sometimes want to introduce facades that hide technology details while keeping the system resilient enough to replace technologies at any time in the future with limited effort and cost.
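
A minimal sketch of such a facade, with hypothetical names:

// Technology-neutral facade the rest of the system codes against.
public interface IBlobStore
{
    void Save(string key, byte[] content);
    byte[] Load(string key);
}

// One replaceable implementation; the configuration detail (a file path)
// lives here, not in the callers.
public class FileSystemBlobStore : IBlobStore
{
    private readonly string _rootPath;
    public FileSystemBlobStore(string rootPath) => _rootPath = rootPath;

    public void Save(string key, byte[] content) =>
        System.IO.File.WriteAllBytes(System.IO.Path.Combine(_rootPath, key), content);

    public byte[] Load(string key) =>
        System.IO.File.ReadAllBytes(System.IO.Path.Combine(_rootPath, key));
}

Swapping in, say, a cloud storage implementation later touches only this layer.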



Source

All of this content is my summary of the third chapter of 'Modern Software Architecture: Domain Models, CQRS, and Event Sourcing', a Pluralsight course by Dino Esposito (https://app.pluralsight.com/library/courses/modern-software-architecture-domain-models-cqrs-event-sourcing/table-of-contents). The course contains more material and demos than my notes, and I am leaving out the final Summary. You can also watch Pluralsight courses free for a month through Microsoft support.

AND

Copyright

All of this content is my summary of the second chapter of 'Modern Software Architecture: Domain Models, CQRS, and Event Sourcing', a Pluralsight course by Dino Esposito (https://app.pluralsight.com/library/courses/modern-software-architecture-domain-models-cqrs-event-sourcing/table-of-contents). I have also received the original author's permission to post these notes.


Content

Discovering the Domain Architecture through DDD

The DDD Layered Architecture

The "Domain Model" Supporting Architecture

The CQRS Supporting Architecture

Event Sourcing

Designing Software Driven by the Domain


Outline

Typical DDD flowchart

Common Summary of DDD

Ubiquitous Language

Defining the Ubiquitous Language

Bounded Contexts

Discovering Bounded Contexts

Context Mapping

Event storming



Typical DDD flowchart

1. Crunch knowledge about the domain

2. Recognize subdomains

3. Design a rich domain model

For each recognized subdomain, design a rich object model that, regardless of concerns like persistence and databases, describes how the involved entities behave and are acted upon by users

4. Code by telling objects in the domain model what to do



Common Summary of DDD

DDD is all about building an object model for the business domain, called a domain model, and consuming the model in the context of a multilayered architecture (four layers, with the business logic split into, and renamed as, the application layer and the domain layer).

However, the best part of DDD is in the tools it provides to understand and make sense of the domain.


DDD has two distinct parts. You always need one but can happily ignore the other.

DDD has an analytical part that essentially sets an approach to express the top-level architecture ideal for the business domain you are considering. The top-level architecture is expressed in terms of constituent elements, subdomains, which are referred to as bounded contexts in the specific DDD jargon. This part is valuable to everybody and every project.


DDD has a strategic part that instead relates to the definition of a supporting architecture for each of the identified bounded contexts; this is one of many possible supporting architectures.



Ubiquitous Language

Vocabulary of domain-specific terms

Nouns, verbs, adjectives, idiomatic expressions and even adverbs

Shared by all parties involved in the project

Primary goal of avoiding misunderstandings and bad assumptions

Used in all forms of spoken and written communication

Universal language of the business as done in the organization


The ubiquitous language is the language of the domain model being built, but it's very close to the natural language of the business domain. It's not artificially created, but it just comes out of interviews and brainstorming sessions. Unambiguous and fluent.

(People use different languages, Common terminology, Help making sense of user requirements)


Structure

List of terms saved to documents

Glossary of terms fully explained

Made available to everyone

Part of the project documentation


Continuously updated

Responsibility of the development team

If the ubiquitous language changes, the model should be changed, and the code changed accordingly



Defining the Ubiquitous Language

"Discovering the ubiquitous language

leads you to understand the business domain

in order to design a model"

PS: Any model that works. Not necessarily an object-oriented model. It can be, for example, a functional model where no classes are used at all, but functions and stored procedures to deal with data.


Start from User Requirements!

Discovery of the terms that make up the ubiquitous language starts from the user requirements. Extract the nouns and verbs from them.


Ubiquitous language == Nouns and verbs that truly reflect the semantics of the business domain


At Work Defining the Ubiquitous Language

Different concepts named differently.

Matching concepts named equally.

so that co-workers can communicate with each other without misunderstanding

The ubiquitous language is neither the raw language of the business nor the language of development


Naming convention is critical : Classes, Members, Namespaces
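
For instance, a minimal sketch of names lifted from the ubiquitous language of a hypothetical booking domain, where the experts say "check out" rather than "finalize":

// Class, member, and namespace names follow the ubiquitous language
// of a hypothetical booking domain.
namespace Booking.Domain
{
    public class RoomReservation
    {
        // Matches the experts' wording; a name like Finalize() or Close()
        // would be developer jargon foreign to the business.
        public void CheckOut()
        {
            // ...
        }
    }
}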



Bounded Contexts

Delimited space where an element has a well-defined meaning

Any elements of the ubiquitous language

Beyond the boundaries of the context, the language changes

Each bounded context has its own ubiquitous language

Business domain split into a web of interconnected contexts

Each context has its own architecture and implementation


Bounded contexts in DDD serve three main purposes

Remove ambiguity and duplication

Simplify design of software modules

Integration of external components



Discovering Bounded Contexts

A bounded context is an area of the domain model that has its own ubiquitous language, its own independent implementation based on a supporting architecture, such as CQRS, and a public documented interface to interact with other bounded contexts.



Context Mapping

The context map is the diagram that provides a comprehensive view of the system being designed

Relationship between bounded contexts


Direction of relationship

The upstream context influences the downstream context


Relationships

Conformist

Downstream context depends on upstream context; no negotiation possible

Customer/Supplier

Customer context depends on supplier context

Chance to raise concerns and have them addressed in some way

Partner

Mutual dependency between the two contexts, which depend on each other for the actual delivery of the code

Shared Kernel

Shared model that can't be changed without consulting teams in charge of contexts that depend on it

Anti-corruption Layer

Additional layer giving the downstream context a fixed interface no matter what happens in the upstream context (see the sketch after this list)
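
A minimal sketch of an anti-corruption layer; all names, including the legacy upstream service, are hypothetical:

// Fixed interface the downstream context codes against.
public interface IShippingFacade
{
    void RequestShipment(int orderId, string address);
}

// Anti-corruption layer: translates the downstream context's model into
// whatever the upstream (here, a made-up legacy service) expects.
public class ShippingAntiCorruptionLayer : IShippingFacade
{
    private readonly LegacyShippingService _upstream = new LegacyShippingService();

    public void RequestShipment(int orderId, string address)
    {
        // If the upstream model changes, only this translation changes.
        _upstream.Dispatch(new LegacyShipmentRequest { Ref = "ORD-" + orderId, Addr = address });
    }
}

public class LegacyShipmentRequest { public string Ref; public string Addr; }
public class LegacyShippingService { public void Dispatch(LegacyShipmentRequest request) { /* ... */ } }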



Event storming

Prerequisites : developers and domain experts, a meeting room with a whiteboard, sticky notes, markers, etc.

An event storming session consists of talking about the observable events in the business domain and listing them on a wall or whiteboard.

A sticky note of a given color is placed on the modeling surface when an event is identified.




Source

All of this content is my summary of the second chapter of 'Modern Software Architecture: Domain Models, CQRS, and Event Sourcing', a Pluralsight course by Dino Esposito (https://app.pluralsight.com/library/courses/modern-software-architecture-domain-models-cqrs-event-sourcing/table-of-contents). The course contains more material and demos than my notes, and I am leaving out the final Summary. You can also watch Pluralsight courses free for a month through Microsoft support.
