ASP flat file database
InsertOne and InsertOneAsync insert a new item into the collection. The method returns true if the insert was successful.

InsertMany and InsertManyAsync insert a list of items into the collection. The Insert methods will update the inserted object's Id field, if the object has a field with that name and the field is writable.

If the Id field is missing from a dynamic object, a field is added with the correct value. If an anonymous type is used for the insert, an id is added to the persisted object if the id field is missing; if the id is present, that value is used. If the id field's type is a number, the value is incremented by one. If the type is a string, the incremented number is appended to the end of the initial text.

If the collection is empty and the id field's type is a number, the first id will be 0. If the type is a string, the first id will be "0".
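The id-assignment rules above can be sketched as a small helper. This is a hypothetical Python illustration of the described behavior, not the library's actual code, and the `id_is_string` flag is my own addition for the empty-collection case:

```python
def next_id(last_id, id_is_string=False):
    """Return the id for a newly inserted item, given the last item's id.

    Mirrors the rules described above: numeric ids are incremented by one;
    for string ids, the number parsed from the end of the text is
    incremented and appended back to the initial text.
    """
    if last_id is None:                       # empty collection
        return "0" if id_is_string else 0
    if isinstance(last_id, int):
        return last_id + 1
    text = str(last_id).rstrip("0123456789")  # initial text, trailing digits removed
    digits = str(last_id)[len(text):]
    counter = int(digits) + 1 if digits else 0
    return f"{text}{counter}"

next_id(None)       # -> 0
next_id(4)          # -> 5
next_id("item4")    # -> "item5"
```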

ReplaceOne and ReplaceOneAsync replace the first item that matches the filter, or whose defined id field matches the provided id value. The method returns true if an item is found with the filter. ReplaceMany and ReplaceManyAsync replace all items that match the filter. ReplaceOne and ReplaceOneAsync have an upsert option: if the item to replace doesn't exist in the data store, a new item is inserted.

Upsert won't update the id, so the new item is inserted with the id it already has. UpdateOne and UpdateOneAsync update the first item that matches the filter, or whose defined id field matches the provided id value. The properties to update are defined with a dynamic object, which can be an anonymous type or an ExpandoObject. UpdateMany and UpdateManyAsync update all items that match the filter.
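As a rough illustration of a partial update with a dynamic object, here is a hedged Python sketch; the dict stands in for an anonymous type or ExpandoObject, and this is not the library's implementation:

```python
def apply_update(item: dict, update: dict) -> dict:
    """Set only the fields that appear in the update data; all other
    fields keep their current values."""
    for key, value in update.items():
        item[key] = value
    return item

employee = {"id": 1, "name": "John", "age": 30}
apply_update(employee, {"age": 31})   # only "age" is defined in the update
# employee -> {"id": 1, "name": "John", "age": 31}
```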

Update can also update items inside the collection and add new items to it. This is because dictionaries and objects are similar when serialized to JSON, so serialization creates an ExpandoObject from a Dictionary. If the update ExpandoObject is created manually, the Dictionary's content can be updated. Unlike a List, a Dictionary's whole content is replaced with the update data's content.
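To make the dictionary semantics concrete, here is a small Python sketch of the described behavior (an illustration, not library code): the update data's dictionary replaces the stored dictionary wholesale, rather than being merged into it.

```python
item = {"id": 1, "tags": {"a": 1, "b": 2}}
update = {"tags": {"a": 10}}

# Whole-content replacement: the stored dictionary is swapped out,
# so the "b" entry is lost rather than preserved by a merge.
item["tags"] = update["tags"]
# item -> {"id": 1, "tags": {"a": 10}}
```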

DeleteOne and DeleteOneAsync remove the first object that matches the filter, or whose defined id field matches the provided id value. The method returns true if an item is found with the filter or with the id. DeleteMany and DeleteManyAsync delete all items that match the filter; the method returns true if items are found with the filter. If the Id property is an integer, the last item's value is incremented by one. If the field is not an integer, it is converted to a string, and a number is parsed from the end of the string and incremented by one.

The data store supports single items. Items can be value and reference types. A single item supports dynamic and typed data. Arrays are considered single items if they contain value types; if an array is empty, it is listed as a collection.

Typed data access will throw a KeyNotFoundException if the key is not found. The server also buffers database write operations: if it detects that many write operations are about to happen in a short space of time, it waits and then performs a single batch write.
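The buffering idea can be sketched like this. This is a minimal, hypothetical design; the window length and the `BufferedWriter` name are my assumptions, not part of the original:

```python
import threading
import time

class BufferedWriter:
    """Collect writes for a short window, then flush them as one batch."""

    def __init__(self, flush, window=0.05):
        self._flush = flush              # callable that persists a batch
        self._window = window            # seconds to wait for more writes
        self._pending = []
        self._lock = threading.Lock()
        self._timer = None

    def write(self, op):
        with self._lock:
            self._pending.append(op)
            if self._timer is None:      # first write opens the window
                self._timer = threading.Timer(self._window, self._do_flush)
                self._timer.start()

    def _do_flush(self):
        with self._lock:
            batch, self._pending = self._pending, []
            self._timer = None
        if batch:
            self._flush(batch)           # one disk write for the whole batch

batches = []
writer = BufferedWriter(batches.append, window=0.01)
for i in range(3):
    writer.write(i)
time.sleep(0.05)                         # let the window elapse
# batches -> [[0, 1, 2]]: three writes, one batch
```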

This is possible because the changes to the file system will still be there if the database crashes, so it can simply reload all changes on restart. I will be working at the byte level to reduce the file size. The main problem will be fragmentation when a record is deleted: because of the variable-length fields, you can't simply add a new record in its place. I could defragment the file when it reaches a certain fragmentation level (the ratio of deleted records to records), but I'd rather avoid this if I can, as it will be an expensive operation for the clients.
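The defragmentation step the question describes could look roughly like this. This is a hedged sketch; the record layout (a one-byte deleted flag plus a 4-byte length prefix before each variable-length payload) is my assumption, not the poster's actual format:

```python
import io
import struct

def write_record(buf, payload: bytes, deleted=False):
    """Append one variable-length record: flag byte, 4-byte length, payload."""
    buf.write(bytes([1 if deleted else 0]))
    buf.write(struct.pack(">I", len(payload)))
    buf.write(payload)

def compact(data: bytes) -> bytes:
    """Rewrite the file image, copying only live records and dropping
    tombstoned (deleted) ones."""
    src, dst = io.BytesIO(data), io.BytesIO()
    while True:
        header = src.read(5)
        if len(header) < 5:
            break
        deleted, length = header[0], struct.unpack(">I", header[1:])[0]
        payload = src.read(length)
        if not deleted:
            write_record(dst, payload)
    return dst.getvalue()

buf = io.BytesIO()
write_record(buf, b"alice.txt")
write_record(buf, b"old.tmp", deleted=True)   # tombstoned record
write_record(buf, b"bob.txt")
compacted = compact(buf.getvalue())           # only live records survive
```

Because every record carries its own length, compaction is a single sequential pass, which is why the poster notes it is expensive: the whole file must be rewritten.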

I'd rather not use fixed-length fields either (the filename could be huge, for instance), but these seem to be my only options? Yes, I am looking at reinventing the wheel, and yes, I know I probably won't come anywhere close to the performance of other databases. My suggestion would be to design and build your database. Don't worry about performance; worry about reliability first and foremost. On a modern PC, you can read flat files fast enough. Separate flat files for each table is a good design.

If a flat file is small enough (domain tables, for example), you could read it once and keep the table in memory. You'd write the table once, on database shutdown. I wouldn't get too concerned about this right away: your database needs to be reliable first. Most modern PCs have plenty of disk space, so this is another feature that can be put off until later. Database reorganizations are usually under DBA control, because reorganization is such an expensive process.
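The read-once, write-on-shutdown approach for small domain tables might be sketched as follows; the JSON file format and the `DomainTable` name are illustrative assumptions:

```python
import json
import os
import tempfile

class DomainTable:
    """Keep a small table entirely in memory; touch disk only at
    startup and shutdown."""

    def __init__(self, path):
        self.path = path
        if os.path.exists(path):
            with open(path) as f:
                self.rows = json.load(f)   # read once, at startup
        else:
            self.rows = {}

    def get(self, key):
        return self.rows.get(key)          # served from memory, no disk I/O

    def put(self, key, value):
        self.rows[key] = value             # in-memory until shutdown

    def shutdown(self):
        with open(self.path, "w") as f:
            json.dump(self.rows, f)        # written once, on shutdown

path = os.path.join(tempfile.mkdtemp(), "countries.json")
table = DomainTable(path)
table.put("FI", "Finland")
table.shutdown()
reloaded = DomainTable(path)               # simulate a restart
# reloaded.get("FI") -> "Finland"
```

The trade-off is the one the answer hints at: reads are fast and simple, but anything written after the last shutdown is lost on a crash, so reliability has to come from elsewhere (e.g. a write-ahead log).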

This is my incomplete project from a few years ago. I can't explain it in more detail, and it's already obsolete, but it's worth experimenting with (which, admittedly, I didn't do). At first impression it looks like a totally disorganized flat-file database, but in my theory it's not the worst-case scenario. Anyone can add concepts or improvements to this, such as encryption, speed enhancements, data binding, data formatting, etc.

Structurally, data is sorted into folders, files, etc. I also think that closing the file connection every time a query is executed will save memory.

Please bear with me: this was proposed a few years ago, so I still use ASP. I call this concept folder-file data delegation, in which data is structured into hierarchies by folder and file; the smallest structures in the database are called atoms.
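One way the folder-file delegation might look on disk is sketched below. The layout (table folder, record folder, one small file per field as the "atom") is my guess at the concept for illustration, not the poster's actual scheme:

```python
import os
import tempfile

def atom_path(root, table, record_id, field):
    """Each field value lives in its own small file: the 'atom'."""
    return os.path.join(root, table, str(record_id), field)

def write_atom(root, table, record_id, field, value):
    path = atom_path(root, table, record_id, field)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:      # connection closed after each query
        f.write(value)

def read_atom(root, table, record_id, field):
    with open(atom_path(root, table, record_id, field)) as f:
        return f.read()

root = tempfile.mkdtemp()
write_atom(root, "users", 1, "name", "alice")
# read_atom(root, "users", 1, "name") -> "alice"
```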
