Patent: Tactile messages in an extended reality environment

Publication Number: 20230393659

Publication Date: 2023-12-07

Assignee: Meta Platforms Technologies

Abstract

Techniques for sending and receiving tactile messages (e.g., haptic emojis) in an extended reality environment to facilitate touch communication between users. In one particular aspect, an extended reality system is provided having a head-mounted device with a display to display content to a first user, sensors to capture input data, processors, and memories accessible to the processors, the memories storing instructions executable by the processors to perform processing including: capturing, using the one or more sensors, the input data from the first user, extracting features from the input data that correspond to an electronic communication, identifying an emoji from a lexicon of emojis based on the extracted features, obtaining digital assets for the emoji, where the digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output, and transmitting the digital assets to a device of a second user.

Claims

What is claimed is:

1. An extended reality system comprising: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user; one or more processors; and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user; extracting features from the input data that correspond to an electronic communication; identifying an emoji from a lexicon of emojis based on the extracted features; obtaining digital assets for the emoji, wherein the digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output; and transmitting the digital assets to a device of a second user.

2. The extended reality system of claim 1, wherein the extracting the features comprises: determining characteristics of the input data, and identifying patterns within the input data that correspond to a key or attributes of electronic communication based on the characteristics, the key or attributes being the extracted features; and wherein the identifying the emoji comprises: constructing a query using the extracted features as parameters of the query, and executing the query on the lexicon of emojis.

3. The extended reality system of claim 1, wherein the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output.

4. The extended reality system of claim 1, wherein the digital assets further comprise an image or video asset, an audio asset, or both.

5. The extended reality system of claim 1, wherein the processing further comprises obtaining additional information based on the emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and transmitting the additional information to the device of the second user.

6. An extended reality system comprising: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user; one or more processors; and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user; predicting a haptic emoji or a haptic signal based on the input data and model parameters learned from historical input data and context data; and transmitting the haptic signal or digital assets for the haptic emoji to a device of a second user.

7. The extended reality system of claim 6, wherein the processing further comprises obtaining the digital assets for the haptic emoji, and the digital assets comprise the haptic signal configured with parameter information to generate patterns for haptic output.

8. The extended reality system of claim 6, wherein the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output.

9. The extended reality system of claim 6, wherein the digital assets further comprise an image or video asset, an audio asset, or both.

10. The extended reality system of claim 7, wherein the processing further comprises obtaining additional information based on the emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and transmitting the additional information to the device of the second user.

12. The extended reality system of claim 11, wherein the parameters of the one or more actuator signals include information on pressure, temperature, texture, shear stress, time, space, or a combination thereof.

13. The extended reality system of claim 11, wherein the processing further comprises, prior to generating the one or more actuator signals, adjusting the parameter information on the interval, pitch, amplitude, or a combination thereof for the haptic signal in accordance with preferences of the first user.

14. The extended reality system of claim 12, wherein the processing further comprises, prior to generating the one or more actuator signals, adjusting the parameter information on the pressure, temperature, texture, shear stress, time, space, or a combination thereof for the one or more actuator signals in accordance with preferences of the first user.

15. The extended reality system of claim 11, wherein the processing further comprises obtaining additional information based on an emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and the haptic output is generated with virtual content, which is generated and rendered by the head-mounted device in an extended reality environment displayed to the first user based on the additional information.

16. The extended reality system of claim 11, wherein the haptic signal is predicted based on input data and model parameters learned from historical input data and context data, and the input data is captured from a head-mounted device of the second user.

17. The extended reality system of claim 11, wherein the haptic signal is part of digital assets obtained for an emoji identified from a lexicon of emojis.

18. The extended reality system of claim 17, wherein the emoji is identified from a lexicon of emojis based on extracted features from input data that correspond to an electronic communication, and the input data is captured from a head-mounted device of a second user.

19. The extended reality system of claim 17, wherein the digital assets further comprise an image or video asset, an audio asset, or both.

20. The extended reality system of claim 17, wherein the haptic signal for the emoji is transmitted to the head-mounted device of the first user.

Description

CROSS-REFERENCE TO RELATED APPLICATION

The present application is a non-provisional application of and claims the benefit and priority under 35 U.S.C. 119(e) of U.S. Provisional Application No. 63/365,689, filed Jun. 1, 2022, the entire contents of which is incorporated herein by reference for all purposes.

FIELD

The present disclosure relates generally to haptic communication in an extended reality environment, and more particularly, to techniques for sending and receiving tactile messages (e.g., haptic emojis) in an extended reality environment to facilitate touch communication between users.

BACKGROUND

BRIEF SUMMARY

Techniques disclosed herein relate generally to haptic communication in an extended reality environment. More specifically and without limitation, techniques disclosed herein relate to sending and receiving tactile messages (e.g., haptic emojis) in an extended reality environment to facilitate touch communication between users. Haptic emojis or reactions are tactile messages that can be sent and received throughout the day with a wearable device (e.g., a haptic glove or wristband). Each haptic emoji or reaction may be accompanied by audio and/or visual components to help train a user on the haptic signals. The tactile messages can be sent through traditional user interfaces, haptic-first interfaces, or more expressive gestures such as a hand wave, where in this example the recipient may feel a haptic pattern to mimic a wave motion.

In various embodiments, an extended reality system is provided that includes: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user; one or more processors; and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user, extracting features from the input data that correspond to an electronic communication, identifying an emoji from a lexicon of emojis based on the extracted features, obtaining digital assets for the emoji, wherein the digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output, and transmitting the digital assets to a device of a second user.

In some embodiments, the extracting the features comprises: determining characteristics of the input data, and identifying patterns within the input data that correspond to a key or attributes of electronic communication based on the characteristics, the key or attributes being the extracted features, and the identifying the emoji comprises: constructing a query using the extracted features as parameters of the query, and executing the query on the lexicon of emojis.

In some embodiments, the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output.

In some embodiments, the digital assets further comprise an image or video asset, an audio asset, or both.

In some embodiments, the processing further comprises obtaining additional information based on the emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and transmitting the additional information to the device of the second user.

In various embodiments, an extended reality system is provided that includes: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user, one or more processors, and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user, predicting a haptic emoji or a haptic signal based on the input data and model parameters learned from historical input data and context data, and transmitting the haptic signal or digital assets for the haptic emoji to a device of a second user.

In some embodiments, the haptic emoji is predicted and the processing further comprises obtaining the digital assets for the haptic emoji, and the digital assets comprise the haptic signal configured with parameter information to generate patterns for haptic output.

In some embodiments, the parameters of the one or more actuator signals include information on pressure, temperature, texture, shear stress, time, space, or a combination thereof.

In some embodiments, the processing further comprises, prior to generating the one or more actuator signals, adjusting the parameter information on the interval, pitch, amplitude, or a combination thereof for the haptic signal in accordance with preferences of the first user.

In some embodiments, the processing further comprises, prior to generating the one or more actuator signals, adjusting the parameter information on the pressure, temperature, texture, shear stress, time, space, or a combination thereof for the one or more actuator signals in accordance with preferences of the first user.

In some embodiments, the processing further comprises obtaining additional information based on an emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and the haptic output is generated with virtual content, which is generated and rendered by the head-mounted device in an extended reality environment displayed to the first user based on the additional information.

Some embodiments of the present disclosure include a computer-implemented method comprising part or all of one or more methods and/or part or all of one or more processes disclosed herein.

Some embodiments of the present disclosure include a system including one or more data processors. In some embodiments, the system includes a non-transitory computer readable storage medium containing instructions which, when executed on the one or more data processors, cause the one or more data processors to perform part or all of one or more methods and/or part or all of one or more processes disclosed herein.

Some embodiments of the present disclosure include a computer-program product tangibly embodied in a non-transitory machine-readable storage medium, including instructions configured to cause one or more data processors to perform part or all of one or more methods and/or part or all of one or more processes disclosed herein.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a simplified block diagram of a network environment in accordance with various embodiments.

FIG. 2A is an illustration depicting an example extended reality system that presents and controls user interface elements within an extended reality environment in accordance with various embodiments.

FIG. 2B is an illustration depicting user interface elements in accordance with various embodiments.

FIG. 3A is an illustration of an augmented reality system in accordance with various embodiments.

FIG. 3B is an illustration of a virtual reality system in accordance with various embodiments.

FIG. 4A is an illustration of haptic devices in accordance with various embodiments.

FIG. 4B is an illustration of an exemplary virtual reality environment in accordance with various embodiments.

FIG. 4C is an illustration of an exemplary augmented reality environment in accordance with various embodiments.

FIG. 5 is a simplified block diagram of a social communication platform in accordance with various embodiments.

FIG. 6A is a simplified block diagram illustrating a social communication system for converting input data to haptic output using a lexicon of emojis in accordance with various embodiments.

FIG. 6B is an illustration of digital assets for a lexicon of emojis in accordance with various embodiments.

FIG. 6C is an illustration of digital assets for a lexicon of emojis in accordance with various embodiments.

FIG. 7 is a flowchart illustrating a process for converting input data to haptic output using a lexicon of emojis in accordance with various embodiments.

FIG. 8 is a simplified block diagram illustrating a machine-learning prediction system in accordance with various embodiments.

FIG. 9 is a flowchart illustrating a process to predict haptic emojis for conveying a touch message in accordance with various embodiments.

FIG. 10 is a simplified block diagram illustrating a social communication system for supplementing a haptic signal with additional information to facilitate a user learning a haptic output in accordance with various embodiments.

FIG. 11 is a flowchart illustrating a process for supplementing a haptic signal with additional information to facilitate a user learning a haptic output in accordance with various embodiments.

FIG. 12 is a simplified block diagram illustrating a signal generator for operating cutaneous actuators to deliver haptic output (tactile feedback) to a user in accordance with various embodiments.

FIG. 13 is a flowchart illustrating a process for generating a haptic output in accordance with various embodiments.

DETAILED DESCRIPTION

In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of certain embodiments. However, it will be apparent that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs.

INTRODUCTION

In another exemplary embodiment, an extended reality system is provided comprising: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user; one or more processors; and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user; predicting a haptic emoji or a haptic signal based on the input data and model parameters learned from historical input data and context data; and transmitting the haptic signal or digital assets for the haptic emoji to a device of a second user.

Advantageously, the tactile messages are more expressive than visual or audio-based messages, and are particularly useful when a user cannot view or listen to visual or audio-based messages.

Extended Reality System Overview

This disclosure contemplates any suitable network 120. As an example and not by way of limitation, one or more portions of a network 120 may include an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, or a combination of two or more of these. A network 120 may include one or more networks 120.

Links 125 may connect a client system 105, a virtual assistant engine 110, and a remote system 115 to a communication network 120 or to each other. This disclosure contemplates any suitable links 125. In particular embodiments, one or more links 125 include one or more wireline (such as for example Digital Subscriber Line (DSL) or Data Over Cable Service Interface Specification (DOCSIS)), wireless (such as for example Wi-Fi or Worldwide Interoperability for Microwave Access (WiMAX)), or optical (such as for example Synchronous Optical Network (SONET) or Synchronous Digital Hierarchy (SDH)) links. In particular embodiments, one or more links 125 each include an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, a portion of the Internet, a portion of the PSTN, a cellular technology-based network, a satellite communications technology-based network, another link 125, or a combination of two or more such links 125. Links 125 need not necessarily be the same throughout a network environment 100. One or more first links 125 may differ in one or more respects from one or more second links 125.

In various embodiments, a client system 105 is an electronic device including hardware, software, or embedded logic components, or a combination of two or more such components, and capable of carrying out the appropriate extended reality functionalities in accordance with techniques of the disclosure. As an example, and not by way of limitation, a client system 105 may include a desktop computer, notebook or laptop computer, netbook, a tablet computer, e-book reader, GPS device, camera, personal digital assistant (PDA), handheld electronic device, cellular telephone, smartphone, a VR, MR, or AR headset such as an AR/VR HMD, other suitable electronic device capable of displaying extended reality content, or any suitable combination thereof. In particular embodiments, the client system 105 is an AR/VR HMD as described in detail with respect to FIG. 2. This disclosure contemplates any suitable client system 105 configured to generate and output extended reality content to the user. The client system 105 may enable its user to communicate with other users at other client systems 105.

A user at the client system 105 may use the virtual assistant application 130 to interact with the virtual assistant engine 110. In some instances, the virtual assistant application 130 is a stand-alone application or integrated into another application such as a social-networking application or another suitable application (e.g., an artificial simulation application). In some instances, the virtual assistant application 130 is integrated into the client system 105 (e.g., part of the operating system of the client system 105), an assistant hardware device, or any other suitable hardware devices. In some instances, the virtual assistant application 130 may be accessed via a web browser 135. In some instances, the virtual assistant application 130 passively listens to and watches interactions of the user in the real-world, and processes what it hears and sees (e.g., explicit input such as audio commands or interface commands, contextual awareness derived from audio or physical actions of the user, objects in the real-world, environmental triggers such as weather or time, and the like) in order to interact with the user in an intuitive manner.

In various embodiments, a remote system 115 may include one or more types of servers, one or more data stores, one or more interfaces, including but not limited to APIs, one or more web services, one or more content sources, one or more networks, or any other suitable components, e.g., that servers may communicate with. A remote system 115 may be operated by a same entity or a different entity from an entity operating the virtual assistant engine 110. In particular embodiments, however, the virtual assistant engine 110 and third-party systems 115 may operate in conjunction with each other to provide virtual content to users of the client system 105. For example, a social-networking system 145 may provide a platform, or backbone, which other systems, such as third-party systems, may use to provide social-networking services and functionality to users across the Internet, and the virtual assistant engine 110 may access these systems to provide virtual content on the client system 105.

The remote system 115 may include a content object provider 150. A content object provider 150 includes one or more sources of virtual content objects, which may be communicated to the client system 105. As an example, and not by way of limitation, virtual content objects may include information regarding things or activities of interest to the user, such as, for example, movie showtimes, movie reviews, restaurant reviews, restaurant menus, product information and reviews, instructions on how to perform various tasks, exercise regimens, cooking recipes, or other suitable information. As another example and not by way of limitation, content objects may include incentive content objects, such as coupons, discount tickets, gift certificates, or other suitable incentive objects. As another example and not by way of limitation, content objects may include virtual objects such as virtual interfaces, 2D or 3D graphics, media content, or other suitable virtual objects.

In the example shown in FIG. 2A, virtual information or objects 240, 245 are mapped at a position relative to a physical object 235. As should be understood, the virtual imagery (e.g., virtual content such as information or objects 240, 245 and virtual user interface 250) does not exist in the real-world, physical environment. Virtual user interface 250 may be fixed, as relative to the user 220, the user's hand 230, physical objects 235, or other virtual content such as virtual information or objects 240, 245, for instance. As a result, client system 200 renders, at a user interface position that is locked relative to a position of the user 220, the user's hand 230, physical objects 235, or other virtual content in the extended reality environment, virtual user interface 250 for display at extended reality system 205 as part of extended reality content 225. As used herein, a virtual element "locked" to a position of virtual content or a physical object is rendered at a position relative to the position of the virtual content or physical object so as to appear to be part of or otherwise tied in the extended reality environment to the virtual content or physical object.
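For illustration only, a minimal sketch of such position locking, assuming simple 3D positions and a fixed offset (the names and values below are invented, not part of the disclosure): each frame, the element is re-rendered at the tracked anchor pose plus a constant offset so it appears tied to the anchor.

    from dataclasses import dataclass

    @dataclass
    class Pose:
        x: float
        y: float
        z: float

    def locked_position(anchor: Pose, offset: Pose) -> Pose:
        # Render position = tracked anchor position plus a fixed offset,
        # so the element appears tied to the anchor as the anchor moves.
        return Pose(anchor.x + offset.x, anchor.y + offset.y, anchor.z + offset.z)

    # Each frame: re-read the tracked pose and re-render the UI there.
    hand_pose = Pose(0.20, 1.10, -0.40)   # from per-frame hand tracking
    ui_offset = Pose(0.00, 0.08, 0.00)    # float the panel 8 cm above the hand
    render_at = locked_position(hand_pose, ui_offset)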

Client system 200 may trigger generation and rendering of virtual content based on a current field of view of user 220, as may be determined by real-time gaze 255 tracking of the user, or other conditions. More specifically, image capture devices of the sensors 215 capture image data representative of objects in the real-world, physical environment that are within a field of view of the image capture devices. During operation, the client system 200 performs object recognition within image data captured by the image capture devices of extended reality system 205 to identify objects in the physical environment such as the user 220, the user's hand 230, and/or physical objects 235. Further, the client system 200 tracks the position, orientation, and configuration of the objects in the physical environment over a sliding window of time. Field of view typically corresponds with the viewing perspective of the extended reality system 205. In some examples, the extended reality application presents extended reality content 225 comprising mixed reality and/or augmented reality.

Various embodiments disclosed herein may include or be implemented in conjunction with various types of extended reality systems. Extended reality content generated by the extended reality systems may include completely computer-generated content or computer-generated content combined with captured (e.g., real-world) content. The extended reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional (3D) effect to the viewer). Additionally, in some embodiments, extended reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, for example, create content in an extended reality and/or are otherwise used in (e.g., to perform activities in) an extended reality.

The extended reality systems may be implemented in a variety of different form factors and configurations. Some extended reality systems may be designed to work without near-eye displays (NEDs). Other extended reality systems may include an NED that also provides visibility into the real world (such as, e.g., augmented reality system 300 in FIG. 3A) or that visually immerses a user in an extended reality (such as, e.g., virtual reality system 350 in FIG. 3B). While some extended reality devices may be self-contained systems, other extended reality devices may communicate and/or coordinate with external devices to provide an extended reality experience to a user. Examples of such external devices include handheld controllers, mobile devices, desktop computers, devices worn by a user, devices worn by one or more other users, and/or any other suitable external system.

As shown in FIG. 3A, augmented reality system 300 may include an eyewear device 305 with a frame 310 configured to hold a left display device 315(A) and a right display device 315(B) in front of a user's eyes. Display devices 315(A) and 315(B) may act together or independently to present an image or series of images to a user. While augmented reality system 300 includes two displays, embodiments of this disclosure may be implemented in augmented reality systems with a single NED or more than two NEDs.

In some embodiments, augmented reality system 300 may include one or more sensors, such as sensor 320. Sensor 320 may generate measurement signals in response to motion of augmented reality system 300 and may be located on substantially any portion of frame 310. Sensor 320 may represent one or more of a variety of different sensing mechanisms, such as a position sensor, an inertial measurement unit (IMU), a depth camera assembly, a structured light emitter and/or detector, or any combination thereof. In some embodiments, augmented reality system 300 may or may not include sensor 320 or may include more than one sensor. In embodiments in which sensor 320 includes an IMU, the IMU may generate calibration data based on measurement signals from sensor 320. Examples of sensor 320 may include, without limitation, accelerometers, gyroscopes, magnetometers, other suitable types of sensors that detect motion, sensors used for error correction of the IMU, or some combination thereof.

In some examples, augmented reality system 300 may also include a microphone array with a plurality of acoustic transducers 325(A)-325(J), referred to collectively as acoustic transducers 325. Acoustic transducers 325 may represent transducers that detect air pressure variations induced by sound waves. Each acoustic transducer 325 may be configured to detect sound and convert the detected sound into an electronic format (e.g., an analog or digital format). The microphone array in FIG. 3A may include, for example, ten acoustic transducers: 325(A) and 325(B), which may be designed to be placed inside a corresponding ear of the user, acoustic transducers 325(C), 325(D), 325(E), 325(F), 325(G), and 325(H), which may be positioned at various locations on frame 310, and/or acoustic transducers 325(I) and 325(J), which may be positioned on a corresponding neckband 330.

In some embodiments, one or more of acoustic transducers 325(A)-(J) may be used as output transducers (e.g., speakers). For example, acoustic transducers 325(A) and/or 325(B) may be earbuds or any other suitable type of headphone or speaker. The configuration of acoustic transducers 325 of the microphone array may vary. While augmented reality system 300 is shown in FIG. 3A as having ten acoustic transducers 325, the number of acoustic transducers 325 may be greater or less than ten. In some embodiments, using higher numbers of acoustic transducers 325 may increase the amount of audio information collected and/or the sensitivity and accuracy of the audio information. In contrast, using a lower number of acoustic transducers 325 may decrease the computing power required by an associated controller 335 to process the collected audio information. In addition, the position of each acoustic transducer 325 of the microphone array may vary. For example, the position of an acoustic transducer 325 may include a defined position on the user, a defined coordinate on frame 310, an orientation associated with each acoustic transducer 325, or some combination thereof.

Acoustic transducers 325(A) and 325(B) may be positioned on different parts of the user's ear, such as behind the pinna, behind the tragus, and/or within the auricle or fossa. Or, there may be additional acoustic transducers 325 on or surrounding the ear in addition to acoustic transducers 325 inside the ear canal. Having an acoustic transducer 325 positioned next to an ear canal of a user may enable the microphone array to collect information on how sounds arrive at the ear canal. By positioning at least two of acoustic transducers 325 on either side of a user's head (e.g., as binaural microphones), augmented reality system 300 may simulate binaural hearing and capture a 3D stereo sound field around a user's head. In some embodiments, acoustic transducers 325(A) and 325(B) may be connected to augmented reality system 300 via a wired connection 340, and in other embodiments acoustic transducers 325(A) and 325(B) may be connected to augmented reality system 300 via a wireless connection (e.g., a Bluetooth connection). In still other embodiments, acoustic transducers 325(A) and 325(B) may not be used at all in conjunction with augmented reality system 300.

Acoustic transducers 325 on frame 310 may be positioned in a variety of different ways, including along the length of the temples, across the bridge, above or below display devices 315(A) and 315(B), or some combination thereof. Acoustic transducers 325 may also be oriented such that the microphone array is able to detect sounds in a wide range of directions surrounding the user wearing the augmented reality system 300. In some embodiments, an optimization process may be performed during manufacturing of augmented reality system 300 to determine relative positioning of each acoustic transducer 325 in the microphone array.

In some examples, augmented reality system 300 may include or be connected to an external device (e.g., a paired device), such as neckband 330. Neckband 330 generally represents any type or form of paired device. Thus, the following discussion of neckband 330 may also apply to various other paired devices, such as charging cases, smart watches, smart phones, wristbands, other wearable devices, hand-held controllers, tablet computers, laptop computers, other external compute devices, etc.

As shown, neckband 330 may be coupled to eyewear device 305 via one or more connectors. The connectors may be wired or wireless and may include electrical and/or non-electrical (e.g., structural) components. In some cases, eyewear device 305 and neckband 330 may operate independently without any wired or wireless connection between them. While FIG. 3A illustrates the components of eyewear device 305 and neckband 330 in example locations on eyewear device 305 and neckband 330, the components may be located elsewhere and/or distributed differently on eyewear device 305 and/or neckband 330. In some embodiments, the components of eyewear device 305 and neckband 330 may be located on one or more additional peripheral devices paired with eyewear device 305, neckband 330, or some combination thereof.

Neckband 330 may be communicatively coupled with eyewear device 305 and/or to other devices. These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to augmented reality system 300. In the embodiment of FIG. 3A, neckband 330 may include two acoustic transducers (e.g., 325(I) and 325(J)) that are part of the microphone array (or potentially form their own microphone subarray). Neckband 330 may also include a controller 342 and a power source 345.

Acoustic transducers 325(I) and 325(J) of neckband 330 may be configured to detect sound and convert the detected sound into an electronic format (analog or digital). In the embodiment of FIG. 3A, acoustic transducers 325(I) and 325(J) may be positioned on neckband 330, thereby increasing the distance between the neckband acoustic transducers 325(I) and 325(J) and other acoustic transducers 325 positioned on eyewear device 305. In some cases, increasing the distance between acoustic transducers 325 of the microphone array may improve the accuracy of beamforming performed via the microphone array. For example, if a sound is detected by acoustic transducers 325(C) and 325(D) and the distance between acoustic transducers 325(C) and 325(D) is greater than, e.g., the distance between acoustic transducers 325(D) and 325(E), the determined source location of the detected sound may be more accurate than if the sound had been detected by acoustic transducers 325(D) and 325(E).
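A rough sketch of why the spacing matters, assuming a simple far-field two-microphone model (this model is an illustration, not described in the disclosure): the angle of arrival follows from the time difference of arrival between the two microphones, and the same timing error corresponds to a smaller angular error when the microphones are farther apart.

    import math

    SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

    def angle_from_tdoa(delay_s: float, spacing_m: float) -> float:
        # Far-field estimate: path difference = speed of sound * delay;
        # the angle of arrival follows from the pair's geometry.
        ratio = SPEED_OF_SOUND * delay_s / spacing_m
        ratio = max(-1.0, min(1.0, ratio))  # clamp for numerical safety
        return math.asin(ratio)

    # The same 50-microsecond delay reads as roughly 59 degrees at 2 cm
    # spacing but only about 7 degrees at 15 cm spacing, so a fixed timing
    # error perturbs the estimated source direction far less.
    for spacing in (0.02, 0.15):
        print(spacing, math.degrees(angle_from_tdoa(50e-6, spacing)))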

Power source 345 in neckband 330 may provide power to eyewear device 305 and/or to neckband 330. Power source 345 may include, without limitation, lithium-ion batteries, lithium-polymer batteries, primary lithium batteries, alkaline batteries, or any other form of power storage. In some cases, power source 345 may be a wired power source. Including power source 345 on neckband 330 instead of on eyewear device 305 may help better distribute the weight and heat generated by power source 345.

As noted, some extended reality systems may, instead of blending an extended reality with actual reality, substantially replace one or more of a user's sensory perceptions of the real world with a virtual experience. One example of this type of system is a head-worn display system, such as virtual reality system 350 in FIG. 3B, that mostly or completely covers a user's field of view. Virtual reality system 350 may include a front rigid body 355 and a band 360 shaped to fit around a user's head. Virtual reality system 350 may also include output audio transducers 365(A) and 365(B). Furthermore, while not shown in FIG. 3B, front rigid body 355 may include one or more electronic elements, including one or more electronic displays, one or more inertial measurement units (IMUs), one or more tracking emitters or detectors, and/or any other suitable device or system for creating an extended reality experience.

In addition to or instead of using display screens, some of the extended reality systems described herein may include one or more projection systems. For example, display devices in augmented reality system 300 and/or virtual reality system 350 may include micro-LED projectors that project light (using, e.g., a waveguide) into display devices, such as clear combiner lenses that allow ambient light to pass through. The display devices may refract the projected light toward a user's pupil and may enable a user to simultaneously view both extended reality content and the real world. The display devices may accomplish this using any of a variety of different optical components, including waveguide components (e.g., holographic, planar, diffractive, polarized, and/or reflective waveguide elements), light-manipulation surfaces and elements (such as diffractive, reflective, and refractive elements and gratings), coupling elements, etc. Extended reality systems may also be configured with any other suitable type or form of image projection system, such as retinal projectors used in virtual retina displays.

The extended reality systems described herein may also include various types of computer vision components and subsystems. For example, augmented reality system 300 and/or virtual reality system 350 may include one or more optical sensors, such as two-dimensional (2D) or 3D cameras, structured light transmitters and detectors, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor. An extended reality system may process data from one or more of these sensors to identify a location of a user, to map the real world, to provide a user with context about real-world surroundings, and/or to perform a variety of other functions.

The extended reality systems described herein may also include one or more input and/or output audio transducers. Output audio transducers may include voice coil speakers, ribbon speakers, electrostatic speakers, piezoelectric speakers, bone conduction transducers, cartilage conduction transducers, tragus-vibration transducers, and/or any other suitable type or form of audio transducer. Similarly, input audio transducers may include condenser microphones, dynamic microphones, ribbon microphones, and/or any other type or form of input transducer. In some embodiments, a single transducer may be used for both audio input and audio output.

In some embodiments, the extended reality systems described herein may also include tactile (e.g., haptic) feedback systems, which may be incorporated into headwear, gloves, body suits, handheld controllers, environmental devices (e.g., chairs, floor mats, etc.), and/or any other type of device or system. Haptic feedback systems may provide various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. Haptic feedback systems may also provide various types of kinesthetic feedback, such as motion and compliance. Haptic feedback may be implemented using motors, piezoelectric actuators, fluidic systems, and/or a variety of other types of feedback mechanisms. Haptic feedback systems may be implemented independent of other extended reality devices, within other extended reality devices, and/or in conjunction with other extended reality devices.

By providing haptic sensations, audible content, and/or visual content, extended reality systems may create an entire virtual experience or enhance a user's real-world experience in a variety of contexts and environments. For instance, extended reality systems may assist or extend a user's perception, memory, or cognition within a particular environment. Some systems may enhance a user's interactions with other people in the real world or may enable more immersive interactions with other people in a virtual world. Extended reality systems may also be used for educational purposes (e.g., for teaching or training in schools, hospitals, government organizations, military organizations, business enterprises, etc.), entertainment purposes (e.g., for playing video games, listening to music, watching video content, etc.), and/or for accessibility purposes (e.g., as hearing aids, visual aids, etc.). The embodiments disclosed herein may enable or enhance a user's extended reality experience in one or more of these contexts and environments and/or in other contexts and environments.

As noted, extended reality systems 300 and 350 may be used with a variety of other types of devices to provide a more compelling extended reality experience. These devices may be haptic interfaces with transducers that provide haptic feedback and/or that collect haptic information about a user's interaction with an environment. The extended reality systems disclosed herein may include various types of haptic interfaces that detect or convey various types of haptic information, including tactile feedback (e.g., feedback that a user detects via nerves in the skin, which may also be referred to as cutaneous feedback) and/or kinesthetic feedback (e.g., feedback that a user detects via receptors located in muscles, joints, and/or tendons).

One or more vibrotactile devices 420 may be positioned at least partially within one or more corresponding pockets formed in textile material 415 of vibrotactile system 400. Vibrotactile devices 420 may be positioned in locations to provide a vibrating sensation (e.g., haptic feedback) to a user of vibrotactile system 400. For example, vibrotactile devices 420 may be positioned against the user's finger(s), thumb, or wrist, as shown in FIG. 4A. Vibrotactile devices 420 may, in some examples, be sufficiently flexible to conform to or bend with the user's corresponding body part(s).

A power source 425 (e.g., a battery) for applying a voltage to the vibrotactile devices 420 for activation thereof may be electrically coupled to vibrotactile devices 420, such as via conductive wiring 430. In some examples, each of vibrotactile devices 420 may be independently electrically coupled to power source 425 for individual activation. In some embodiments, a processor 435 may be operatively coupled to power source 425 and configured (e.g., programmed) to control activation of vibrotactile devices 420.

Vibrotactile system 400 may optionally include other subsystems and components, such as touch-sensitive pads 450, pressure sensors, motion sensors, position sensors, lighting elements, and/or user interface elements (e.g., an on/off button, a vibration control element, etc.). During use, vibrotactile devices 420 may be configured to be activated for a variety of different reasons, such as in response to the user's interaction with user interface elements, a signal from the motion or position sensors, a signal from the touch-sensitive pads 450, a signal from the pressure sensors, a signal from the other device or system 440, etc.

Although power source 425, processor 435, and communications interface 445 are illustrated in FIG. 4A as being positioned in haptic device 410, the present disclosure is not so limited. For example, one or more of power source 425, processor 435, or communications interface 445 may be positioned within haptic device 405 or within another wearable textile.

Haptic wearables, such as those shown in and described in connection with FIG. 4A, may be implemented in a variety of types of extended reality systems and environments. FIG. 4B shows an example extended reality environment 460 including one head-mounted virtual reality display and two haptic devices (e.g., gloves), and in other embodiments any number and/or combination of these components and other components may be included in an extended reality system. For example, in some embodiments there may be multiple head-mounted displays each having an associated haptic device, with each head-mounted display and each haptic device communicating with the same console, portable computing device, or other computing system.

While haptic interfaces may be used with virtual reality systems, as shown in FIG. 4B, haptic interfaces may also be used with augmented reality systems, as shown in FIG. 4C. FIG. 4C is a perspective view of a user 475 interacting with an augmented reality system 480. In this example, user 475 may wear a pair of augmented reality glasses 485 that may have one or more displays 487 and that are paired with a haptic device 490. In this example, haptic device 490 may be a wristband that includes a plurality of band elements 492 and a tensioning mechanism 495 that connects band elements 492 to one another.

One or more of band elements 492 may include any type or form of actuator suitable for providing haptic feedback. For example, one or more of band elements 492 may be configured to provide one or more of various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. To provide such feedback, band elements 492 may include one or more of various types of actuators. In one example, each of band elements 492 may include a vibrotactor (e.g., a vibrotactile actuator) configured to vibrate in unison or independently to provide one or more of various types of haptic sensations to a user. Alternatively, only a single band element or a subset of band elements may include vibrotactors.
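As a hedged sketch of the unison-versus-independent distinction (the actuator interface below is hypothetical and not part of the disclosure), driving the band elements 492 could look like:

    # Hypothetical actuator interface for illustration only.
    class Vibrotactor:
        def __init__(self, index: int):
            self.index = index

        def vibrate(self, amplitude: float, duration_ms: int):
            print(f"element {self.index}: amp={amplitude} for {duration_ms} ms")

    band_elements = [Vibrotactor(i) for i in range(6)]

    def drive_in_unison(elements, amplitude, duration_ms):
        # Every vibrotactor plays the same burst at the same time.
        for el in elements:
            el.vibrate(amplitude, duration_ms)

    def drive_independently(elements, per_element):
        # Each vibrotactor gets its own (amplitude, duration) pair,
        # e.g., to sweep a sensation around the wrist.
        for el, (amplitude, duration_ms) in zip(elements, per_element):
            el.vibrate(amplitude, duration_ms)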

Haptic devices 405, 410, 470, and 490 may include any suitable number and/or type of haptic transducer, sensor, and/or feedback mechanism. For example, haptic devices 405, 410, 470, and 490 may include one or more mechanical transducers, piezoelectric transducers, and/or fluidic transducers. Haptic devices 405, 410, 470, and 490 may also include various combinations of different types and forms of transducers that work together or independently to enhance a user's extended reality experience. In one example, each of band elements 492 of haptic device 490 may include a vibrotactor (e.g., a vibrotactile actuator) configured to vibrate in unison or independently to provide one or more of various types of haptic sensations to a user.

In some embodiments, the data 525 obtained via the client system 505 is associated with one or more privacy settings. The data 525 may be stored on or otherwise associated with any suitable computing system or application, such as, for example, a social-networking system, a client system, a third-party system, a messaging application, a photo-sharing application, a biometric data acquisition application, an artificial-reality application, a virtual assistant application, and/or any other suitable computing system or application.

In some embodiments, privacy settings for the data 525 may specify a "blocked list" of users or other entities that should not be allowed to access certain information associated with the data 525. In some cases, the blocked list may include third-party entities. The blocked list may specify one or more users or entities for which the data 525 is not visible.

Privacy settings associated with the data 525 may specify any suitable granularity of permitted access or denial of access. As an example, access or denial of access may be specified for particular users (e.g., only me, my roommates, my boss), users within a particular degree-of-separation (e.g., friends, friends-of-friends), user groups (e.g., the gaming club, my family), user networks (e.g., employees of particular employers, students or alumni of particular university), all users ("public"), no users ("private"), users of third-party systems, particular applications (e.g., third-party applications, external websites), other suitable entities, or any suitable combination thereof. In some embodiments, different pieces of the data 525 of the same type associated with a user may have different privacy settings. In addition, one or more default privacy settings may be set for each piece of data 525 of a particular data-type.
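Purely as an illustrative sketch (the disclosure specifies no particular data model), per-item privacy settings combining a blocked list with audience granularity might gate access to a piece of data 525 like this:

    from dataclasses import dataclass, field

    @dataclass
    class PrivacySettings:
        audience: str = "private"              # e.g., "public", "friends", "private"
        blocked: set = field(default_factory=set)

    def may_access(viewer, owner, friends, settings) -> bool:
        if viewer in settings.blocked:         # the blocked list always wins
            return False
        if viewer == owner or settings.audience == "public":
            return True
        if settings.audience == "friends":
            return viewer in friends
        return False                           # "private" or unrecognized audience

    settings = PrivacySettings(audience="friends", blocked={"eve"})
    print(may_access("bob", "alice", {"bob"}, settings))   # True
    print(may_access("eve", "alice", {"eve"}, settings))   # False (blocked)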

Although the social communication platform 500 is described with regard to generating the haptic signal 535 at the client system 505(a) of the sending user, it should be understood that the haptic signal 535 can alternatively be generated at the client system 505(b) of the receiving user or a completely different remote system (e.g., a distributed social networking system) using similar components and techniques described herein. Moreover, the social communication platform 500 illustrates a one-way haptic communication where the sending user sends a haptic signal to the receiving user; however, it should be understood that the haptic communication can be bidirectional, and the client system 505(b) of the receiving user could have similar components as described with respect to the client system 505(a) of the sending user, and likewise the client system 505(a) of the sending user could have similar components as described with respect to the client system 505(b) of the receiving user. Further, a sending user can broadcast the haptic signal via network 540 to a plurality of client systems 505(b-n) associated with receiving users instead of a single receiving user.

Touch Communication Techniques

Touch Communication Using a Lexicon of Emojis

FIG. 6A is a block diagram illustrating components of a social communication system 600 for converting input data 605 to haptic output 610 using a lexicon of emojis 615 in accordance with various embodiments. To generate the haptic output 610, input data 605 from a first user (sending user) is processed by an algorithm using the lexicon of emojis 615 to obtain a corresponding haptic signal that is transmitted to a second user (receiving user) to operate the haptic feedback device. The haptic feedback device receives the transmitted haptic signals, translates the haptic signals into the haptic output 610, and transmits the haptic output 610 corresponding to the received haptic signals to a body of the second user.

In some instances, the lexicon of emojis 615 may be a key-value store, or key-value database, which is a type of data storage software program that stores data as a set of unique identifiers, each of which has an associated value. This data pairing is known as a "key-value pair." The unique identifier is the "key" for an item of data, and a value is either the data being identified or the location of that data. Although the lexicon of emojis 615 is described herein as a key-value database, it should be understood that other database designs could be used without departing from the spirit and scope of the present disclosure. For example, in other instances, the lexicon of emojis 615 is a relational database, where data is stored in tables composed of rows and columns. The database developer specifies attributes of the data (i.e., emojis and assets thereof) to be stored in the table up front. This creates significant opportunities for optimizations such as data compression and performance around aggregations and data access. The attributes of the data may be queried in a similar fashion as keys in the key-value database to identify emojis associated with such attributes.
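As a minimal sketch of the key-value design (the keys, asset names, and parameter values below are invented for illustration; they are not part of the disclosure):

    # Each unique key maps to the emoji's digital assets (or their locations).
    lexicon_615 = {
        "wave": {
            "image": "wave.json",   # visual component
            "audio": "wave.wav",    # audio component
            "haptic": {"interval_ms": 120, "pitch_hz": 180, "amplitude": 0.6},
        },
        "laugh": {
            "image": "laugh.gif",
            "audio": "haha.mp3",
            "haptic": {"interval_ms": 80, "pitch_hz": 220, "amplitude": 0.8},
        },
    }

    def lookup(key):
        # Key-value access: the key identifies the item of data, and the
        # value is the data itself or a pointer to where it is stored.
        return lexicon_615.get(key)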

The lexicon of emojis 615 may comprise any number of emojis 620(A-N). Each of the emojis 620 is configured with a corresponding electronic communication that includes a visual component (shown in FIG. 6B as the character in each illustration), an audio component (shown in FIG. 6B as the verbal utterance in each illustration), a haptic component (shown in FIG. 6C as the haptic signal pattern in each illustration), or a combination thereof. Emojis with a visual component (e.g., a pictogram, logogram, or ideogram) are associated within the lexicon to an image or video asset (e.g., a jpeg, gif, mov, or json file). Emojis with an audio component are associated within the lexicon to an audio asset (e.g., a wav or mp3 file). Emojis with a haptic component are associated within the lexicon to a haptic signal (e.g., parameter information on interval, pitch, amplitude, or a combination thereof for a touch message to be perceived by a receiving user's body), which can be converted into haptic output 610.

The haptic signal for each emoji may be pre-generated. In some instances, the haptic signal is configured with parameter information for interval, pitch, and amplitude to generate patterns for the haptic output 610 that match the image or animation of the emoji and/or the sound effect of the emoji (i.e., the image or audio component supplements the understanding of the haptic component). In other instances, the haptic signal is configured with parameter information determined by a user (e.g., a perceptual scientist) to generate patterns for the haptic output 610 that best communicate the emotion to a user (i.e., the haptic component has a high likelihood of conveying the emotion to a user without the image or audio component). In still other instances, the haptic signal is configured with parameter information determined by a user (e.g., a user of the HMD device) to generate patterns for the haptic output 610 that customize touch communication to a user (i.e., the haptic component is customized for conveying the emotion to a user with or without the image or audio component).
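One plausible rendering of the interval/pitch/amplitude parameter information into a drive waveform, sketched under the assumption (not stated in the disclosure) that a pattern is a series of sine bursts separated by silent gaps:

    import math

    def haptic_pattern(interval_ms, pitch_hz, amplitude, pulses=3,
                       pulse_ms=100, sample_rate=1000):
        # 'pulses' sine bursts at 'pitch_hz', scaled by 'amplitude',
        # separated by silent gaps of 'interval_ms' each.
        samples = []
        gap = [0.0] * int(sample_rate * interval_ms / 1000)
        for _ in range(pulses):
            for n in range(int(sample_rate * pulse_ms / 1000)):
                t = n / sample_rate
                samples.append(amplitude * math.sin(2 * math.pi * pitch_hz * t))
            samples.extend(gap)
        return samples

    # A fast, high-pitch, strong pattern versus a slow, soft one.
    strong = haptic_pattern(interval_ms=80, pitch_hz=220, amplitude=0.8)
    gentle = haptic_pattern(interval_ms=200, pitch_hz=120, amplitude=0.3)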

A lexicon signal converter 625 converts the input data 605 into haptic signals using the lexicon of emojis 615. The lexicon signal converter 625 may be a component in a signal generator (e.g., signal generator 555 described with respect to FIG. 5). The lexicon signal converter 625 comprises an input data processing module 630, a pattern recognition module 635, and a query engine 640. The lexicon signal converter 625 determines the characteristics of the input data 605 received (e.g., text, audio, images or video, sensor data, or the like) using the input data processing module 630, identifies a key or attributes within the input data 605 using the pattern recognition module 635, and communicates the key or attributes to the query engine 640 for searching the lexicon of emojis 615 to identify one or more emojis associated with an electronic communication.
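A compressed sketch of that pipeline, reusing the lexicon sketch above (the string matching here is a toy stand-in for the pattern recognition module 635; a real system would analyze text, audio, images, or sensor data):

    def extract_features(input_data: str) -> dict:
        # Toy pattern recognition: map raw input to a key or attributes.
        features = {}
        if "wave" in input_data.lower():
            features["key"] = "wave"
        elif "laugh" in input_data.lower() or "haha" in input_data.lower():
            features["key"] = "laugh"
        return features

    def query_lexicon(lexicon: dict, features: dict):
        # Query engine 640: the extracted features become query parameters.
        key = features.get("key")
        return lexicon.get(key) if key is not None else None

    assets = query_lexicon(lexicon_615, extract_features("User waves hello"))
    haptic_signal = assets["haptic"] if assets else None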

FIG. 7 is a flowchart illustrating a process 700 for converting input data to haptic output using a lexicon of emojis according to various embodiments. The processing depicted in FIG. 7 may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 7 and described below is intended to be illustrative and non-limiting. Although FIG. 7 depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in some different order or some steps may also be performed in parallel. In certain embodiments, such as in an embodiment depicted in FIG. 1, 2A, 2B, 3A, 3B, 4A, 4B, 4C, 5, or 6A-6C, the processing depicted in FIG. 7 may be performed by a social communication platform or system that facilitates touch communication with users.

At step 705, input data is obtained from a client system of a first user (e.g., captured using one or more sensors). In some instances, the one or more sensors capture input data including images of a visual field of the first user wearing a head-mounted device comprising a display to display content to the first user. The input data includes: (i) data regarding activity of the user in an extended reality environment (e.g., images and audio of the user interacting in the physical environment and/or the virtual environment), (ii) data from external systems, or (iii) both. In some instances, the data regarding activity of the user includes text, audio, images or video, sensor data, or the like.

At step 710, features are extracted from the input data that correspond to an electronic communication. The extracting comprises determining characteristics of the input data and identifying patterns within the input data that correspond to a key or attributes of electronic communication based on the characteristics. The key or attributes are the extracted features.

At step 715, an emoji (e.g., a haptic emoji) is identified from a lexicon of emojis based on the extracted features. The identifying the emoji comprises constructing a query using the extracted features as parameters of the query and executing the query on the lexicon of emojis.

At step 720, digital assets are obtained for the emoji. The digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output. In some instances, the digital assets further comprise an image or video asset, an audio asset, or both. The haptic signal for the emoji may be pre-generated. In some instances, the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output. In some instances, the haptic signal is configured with parameter information for interval, pitch, and amplitude to generate patterns for the haptic output that match the image or animation of the emoji and/or the sound effect of the emoji. In other instances, the haptic signal is configured with parameter information determined by a user (e.g., the first user or another user) to generate patterns for the haptic output that communicate an emotion via touch communication to the second user.

At step 725, the digital assets are transmitted to a device of a second user. In some instances, the device is another head-mounted device. The device is configured to convert the haptic signal to the haptic output based on the parameter information in order to convey a touch message as at least part of the electronic communication to the second user via a haptic device. In some instances, the haptic output is generated with virtual content (e.g., the image or animation of the emoji and/or the sound effect of the emoji) that is generated and rendered by the client system in the extended reality environment displayed to the user based on the digital assets (e.g., the image or video asset, the audio asset, or both).

Touch Communication Using AI Based System

A prediction model 825 can be a machine-learning model, such as a convolutional neural network ("CNN"), e.g., an inception neural network, a residual neural network ("Resnet"), or a recurrent neural network, e.g., long short-term memory ("LSTM") models or gated recurrent units ("GRUs") models, or other variants of deep neural networks ("DNN") (e.g., a multi-label n-binary DNN classifier or multi-class DNN classifier). A prediction model 825 can also be any other suitable ML model trained for providing a recommendation, such as a generative adversarial network (GAN), Naive Bayes classifier, linear classifier, support vector machine, bagging models such as a random forest model, boosting models, shallow neural networks, or combinations of one or more of such techniques, e.g., CNN-HMM or MCNN (multi-scale convolutional neural network). The machine-learning prediction system 800 may employ the same type of prediction model or different types of prediction models for predicting haptic emojis for conveying a touch message. Still other types of prediction models may be implemented in other examples according to this disclosure.

To train the various prediction models 825, the training stage 810 is comprised of two main components: dataset preparation module 830 and model training framework 840. The dataset preparation module 830 performs the processes of loading data assets 845, splitting the data assets 845 into training and validation sets 845a-n so that the system can train and test the prediction models 825, and pre-processing of data assets 845. The splitting of the data assets 845 into training and validation sets 845a-n may be performed randomly (e.g., a 90/10% or 70/30% split) or the splitting may be performed in accordance with a more complex validation technique such as K-Fold Cross-Validation, Leave-one-out Cross-Validation, Leave-one-group-out Cross-Validation, Nested Cross-Validation, or the like to minimize sampling bias and overfitting.
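For concreteness, a small dependency-free sketch of one such technique, K-fold cross-validation (a 90/10 random split would be the simpler alternative mentioned above; the fold count and seed are arbitrary):

    import random

    def k_fold_splits(assets, k=5, seed=0):
        # Shuffle once, deal the assets into k folds, and use each fold
        # in turn as the validation set with the rest for training.
        items = list(assets)
        random.Random(seed).shuffle(items)
        folds = [items[i::k] for i in range(k)]
        for i in range(k):
            validation = folds[i]
            training = [x for j, fold in enumerate(folds) if j != i for x in fold]
            yield training, validation

    for training, validation in k_fold_splits(range(10), k=5):
        print(len(training), len(validation))   # 8 2, five times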

The model training stage 810 outputs trained models including one or more trained prediction models 860. The one or more trained prediction models 860 may be deployed and used in the implementation stage 820 to predict a haptic emoji or haptic signal 865 for conveying a touch message. For example, prediction models 860 may receive input data 870 (e.g., a gesture by a first user) or context data (e.g., a text message received by a second user), and predict a haptic emoji or haptic signal based on features and relationships between features extracted from within the input data 870.
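Sketched against a scikit-learn-style classifier interface (an assumption; the disclosure does not name a framework), inference in the implementation stage 820 might look like the following, where 'featurize' is a hypothetical stand-in for whatever turns the gesture and context into a fixed-length feature vector:

    def featurize(input_data, context_data):
        # Hypothetical: a real system would extract features and the
        # relationships between them from input data 870 and context data.
        return [float(len(str(input_data))), float(len(str(context_data)))]

    def predict_haptic_emoji(model, input_data, context_data):
        # 'model' is assumed to expose predict_proba / classes_ in the
        # scikit-learn style; pick the highest-scoring haptic emoji.
        features = featurize(input_data, context_data)
        scores = model.predict_proba([features])[0]
        best = max(range(len(scores)), key=lambda i: scores[i])
        return model.classes_[best]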

FIG. 9 is a flowchart illustrating a process 900 to predict haptic emojis for conveying a touch message according to various embodiments. The processing depicted in FIG. 9 may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 9 and described below is intended to be illustrative and non-limiting. Although FIG. 9 depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in some different order or some steps may also be performed in parallel. In certain embodiments, such as in an embodiment depicted in FIG. 1, 2A, 2B, 3A, 3B, 4A, 4B, 4C, 5, or 8, the processing depicted in FIG. 9 may be performed by a social communication platform or system that facilitates touch communication with users.

At step 905, input data is obtained from a client system of a first user (e.g., captured using one or more sensors). In some instances, the one or more sensors capture input data including images of a visual field of the first user wearing a head-mounted device comprising a display to display content to the first user. The input data includes: (i) data regarding activity of the user in an extended reality environment (e.g., images and audio of the user interacting in the physical environment and/or the virtual environment), (ii) data from external systems, or (iii) both. In some instances, the data regarding activity of the user includes text, audio, images or video, sensor data, or the like.

At step 910, a haptic emoji or a haptic signal is predicted based on the input data and model parameters learned from historical input data (e.g., a gesture by a first user) and context data (e.g., a text message received by a second user).

At optional step 915 (in instances where a haptic emoji is predicted), digital assets are obtained for the emoji. The digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output. In some instances, the digital assets further comprise an image or video asset, an audio asset, or both. The haptic signal for the emoji may be pre-generated. In some instances, the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output. In some instances, the haptic signal is configured with parameter information for interval, pitch, and amplitude to generate patterns for the haptic output that match the image or animation of the emoji and/or the sound effect of the emoji. In other instances, the haptic signal is configured with parameter information determined by a user (e.g., the first user or another user) to generate patterns for the haptic output that communicate an emotion via touch communication to the second user.

At step 920, the digital assets or haptic signal are transmitted to a device of a second user. In some instances, the device is another head-mounted device. The device is configured to convert the haptic signal to the haptic output based on the parameter information in order to convey a touch message as at least part of the electronic communication to the second user via a haptic device. In some instances, the haptic output is generated with virtual content (e.g., the image or animation of the emoji and/or the sound effect of the emoji) that is generated and rendered by the client system in the extended reality environment displayed to the user based on the digital assets (e.g., the image or video asset, the audio asset, or both).
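
As a hypothetical illustration of step 920, the digital assets could be bundled into a single payload for transmission to the second user's device; the JSON schema and field names below are invented for the example and are not specified by the patent.

```python
# Hypothetical payload bundling the digital assets for transmission to
# the second user's device.  The schema and field names are invented.
import base64
import json

def build_payload(emoji_id: str, haptic_params: dict,
                  audio_bytes: bytes = b"", image_bytes: bytes = b"") -> str:
    payload = {
        "emoji": emoji_id,
        "haptic_signal": haptic_params,  # e.g., interval, pitch, amplitude
        "audio_asset": base64.b64encode(audio_bytes).decode("ascii"),
        "image_asset": base64.b64encode(image_bytes).decode("ascii"),
    }
    return json.dumps(payload)

message = build_payload(
    "wave", {"interval_s": 0.05, "pitch_hz": 180.0, "amplitude": 0.8})
print(message)
```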

Learning Program to Facilitate Learning of the Haptic Output

The input data 1005 may be text, audio, images or video, sensor data, or the like. The additional information 1030 may include a text description of the touch communication conveyed by the haptic signal (e.g., for a wave haptic signal, the text could say "sending user" waves hello to "receiving user"), an audio component corresponding to a haptic signal (e.g., a laughing sound corresponding to a HaHaHa haptic signal), an image component corresponding to a haptic signal (e.g., a character giving a thumbs down for a nope haptic signal), or a combination thereof.

In other instances, where the artificial intelligence based system 1020 predicts a haptic emoji or haptic signal, the learning module 1025 takes as input the haptic signal (or corresponding haptic emoji information) and determines, using one or more rules, logic, or machine-learning models, additional information 1030 (e.g., an audio component or an image component) that could be used to supplement the haptic signal. For example, the learning module 1025 may use one or more rules, logic, or machine-learning models to determine a text component, an audio component, and/or an image component that could be used to supplement the haptic signal (or corresponding haptic emoji information), then retrieve the text component, the audio component, and/or the image component from the data storage device 1035 or a secondary data storage device 1040 (e.g., a remote storage device or third-party storage device) and forward them along with the haptic component.
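
A minimal sketch of the rule-based path of the learning module 1025, assuming a simple lookup table: a haptic emoji is mapped to supplemental text, audio, and image components. The table entries echo the examples above; the asset file names and the storage access are hypothetical.

```python
# Sketch of a rule-based learning module: a haptic emoji is mapped to
# supplemental components.  Entries echo the examples in the text; the
# asset file names and the storage lookup are hypothetical.
SUPPLEMENT_RULES = {
    "wave":   {"text": '"sending user" waves hello to "receiving user"',
               "image": "wave.png"},
    "HaHaHa": {"text": "laughing", "audio": "laugh.ogg"},
    "nope":   {"text": "nope", "image": "thumbs_down.png"},
}

def supplement(haptic_emoji: str) -> dict:
    """Return additional information 1030 to forward with the haptic signal.

    In a fuller system the named assets would be retrieved from the data
    storage device 1035 or a secondary storage device 1040 before being
    forwarded along with the haptic component.
    """
    return SUPPLEMENT_RULES.get(haptic_emoji, {})

print(supplement("HaHaHa"))
```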

The benefits and advantages of this approach are that the receiving user may more easily learn the haptic output patterns and their associated meaning based on associated visual and/or audio context. For example, the learning module 1025 may be configured to transmit the haptic signal along with a visual and/or audio signal to the receiving user such that, when the user feels the haptic output 1010 based on the haptic signal, the user concurrently visualizes the visual signal (e.g., a visual emoji) on a display and/or hears the audio signal; in this manner, the user learns to associate the haptic output pattern with an associated visual and/or audio context. The visual and/or audio signal may be obtained as part of the additional information 1030 and associated and transmitted with the haptic signal by the learning module 1025. Additionally or alternatively, the visual and/or audio signal may be generated based on the additional information 1030 by the learning module 1025, and associated and transmitted with the haptic signal by the learning module 1025.

FIG. 11 is a flowchart illustrating a process 1100 for supplementing a haptic signal with additional information to facilitate a user learning a haptic output in accordance with various embodiments. The processing depicted in FIG. 11 may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 11 and described below is intended to be illustrative and non-limiting. Although FIG. 11 depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in some different order, or some steps may also be performed in parallel. In certain embodiments, such as in an embodiment depicted in FIG. 1, 2A, 2B, 3A, 3B, 4A, 4B, 4C, 5, 6A-6C, 8, or 10, the processing depicted in FIG. 11 may be performed by a social communication platform or system that facilitates touch communication with users.

At step 1105, input data is obtained from a client system of a first user (e.g., captured using one or more sensors). In some instances, the one or more sensors capture input data including images of a visual field of the first user wearing a head-mounted device comprising a display to display content to the first user. The input data includes: (i) data regarding activity of the user in an extended reality environment (e.g., images and audio of the user interacting in the physical environment and/or the virtual environment), (ii) data from external systems, or (iii) both. In some instances, the data regarding activity of the user includes text, audio, images or video, sensor data, or the like.

At step 1110, an emoji (e.g., a haptic emoji) or haptic signal is identified from a lexicon of emojis or an artificial intelligence based system, as described with respect to FIGS. 6A-6C, 7, 8, and 9.

At step 1115, additional information is obtained based on the emoji or haptic signal. The additional information may include a text description of the touch communication conveyed by the haptic signal (e.g., for a wave haptic signal, the text could say "sending user" waves hello to "receiving user"), an audio component corresponding to a haptic signal (e.g., a laughing sound corresponding to a HaHaHa haptic signal), an image component corresponding to a haptic signal (e.g., a character giving a thumbs down for a nope haptic signal), or a combination thereof.

At step 1120, the haptic signal and additional information are transmitted to a device of a second user. In some instances, the device is another head-mounted device. The device is configured to convert the haptic signal to the haptic output based on the parameter information in order to convey a touch message as at least part of the electronic communication to the second user via a haptic device. The haptic output is generated with virtual content (e.g., the image or animation of the emoji and/or the sound effect of the emoji), which is generated and rendered by the client system in the extended reality environment displayed to the user based on the additional information (e.g., the text, the image or video, the audio, or any combination thereof).

Receiving the Haptic Signal and Generating the Haptic Output

The processor 1215 reads instructions from the memory 1230 and executes them to perform various operations. The processor 1215 may be embodied using any suitable instruction set architecture and may be configured to execute instructions defined in that instruction set architecture. The processor 1215 may be a general-purpose or embedded processor using any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, RISC, ARM or MIPS ISAs, or any other suitable ISA. Although a single processor is illustrated in FIG. 12, the signal generator 1200 may include multiple processors.

The haptic interface circuit 1220 is a circuit that interfaces with the cutaneous actuators 1205. The haptic interface circuit 1220 generates actuator signals 1210 based on commands from the processor 1215. For this purpose, the haptic interface circuit 1220 may include, for example, a digital-to-analog converter (DAC) for converting digital signals into analog signals. The haptic interface circuit 1220 may also include an amplifier to amplify the analog signals for transmitting the actuator signals 1210 over cables between the signal generator 1200 and the cutaneous actuators 1205. In some embodiments, the haptic interface circuit 1220 communicates with the actuators 1205 wirelessly. In such embodiments, the haptic interface circuit 1220 includes components for modulating wireless signals for transmitting to the actuators 1205 over wireless channels.
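
In software terms, the DAC and amplifier stages of the haptic interface circuit 1220 could be modeled as below; the 12-bit resolution and the fixed gain value are assumptions for illustration only, not parameters given in the specification.

```python
# Software model of the DAC and amplifier stages.  The 12-bit DAC
# resolution and the fixed gain are illustrative assumptions.
import numpy as np

def to_dac_codes(waveform: np.ndarray, bits: int = 12) -> np.ndarray:
    """Quantize samples in [-1, 1] to unsigned DAC codes."""
    full_scale = (1 << bits) - 1
    clipped = np.clip(waveform, -1.0, 1.0)
    return np.round((clipped + 1.0) / 2.0 * full_scale).astype(np.uint16)

def amplify(volts: np.ndarray, gain: float = 4.0) -> np.ndarray:
    """Amplifier stage driving the actuator signals over cables."""
    return gain * volts

wave = np.sin(np.linspace(0, 2 * np.pi, 16))
print(to_dac_codes(wave))   # quantized codes for the DAC
print(amplify(wave))        # amplified drive, modeled numerically
```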

The communication module 1225 (e.g., the receiving device 570 described with respect to FIG. 5) is hardware or combinations of hardware, firmware, and software for communicating with other computing devices. The communication module 1225 may, for example, enable the signal generator 1200 to communicate with a social networking system, a transmitting or sending client system, or an electronic communication source over the network. The communication module 1225 may be embodied as a network card. The memory 1230 is a non-transitory computer readable storage medium for storing software modules. Software modules stored in the memory 1230 may include, among others, applications 1240 and a haptic signal processor 1245 (e.g., the signal processor 547 described with respect to FIG. 5). The memory 1230 may include other software modules not illustrated in FIG. 12, such as an operating system. The applications 1240 may use haptic output via the cutaneous actuators 1205 to perform various functions, such as electronic communication, gaming, and entertainment.

The signal generator 1200 as illustrated in FIG. 12 is merely illustrative and various modifications may be made to the signal generator 1200. For example, instead of embodying the signal generator 1200 as a software module, the signal generator 1200 may be embodied as a hardware circuit, or a combination of hardware circuits and software modules.

FIG. 13 is a flowchart illustrating a process 1300 for generating a haptic output in accordance with various embodiments. The processing depicted in FIG. 13 may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 13 and described below is intended to be illustrative and non-limiting. Although FIG. 13 depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in some different order, or some steps may also be performed in parallel. In certain embodiments, such as in an embodiment depicted in FIG. 1, 2A, 2B, 3A, 3B, 4A, 4B, 4C, 5, 6A-6C, 8, 10, or 12, the processing depicted in FIG. 13 may be performed by a social communication platform or system that facilitates touch communication with users.

At step 1315, the one or more actuator signals are generated based on the parameters determined for the one or more actuator signals. The generating of the one or more actuator signals may include performing digital-to-analog conversion of the haptic signal and/or one or more actuator signals.
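
As one hypothetical way to derive per-actuator signals at step 1315, copies of the rendered haptic waveform could be staggered across actuators to produce a travelling sensation; the stagger scheme and its parameters below are assumptions, since the specification only requires that the signals follow the determined parameters.

```python
# Hypothetical derivation of per-actuator signals: delayed copies of one
# haptic waveform so the actuators fire in sequence (a travelling
# sensation).  The stagger scheme is an assumption, not the patent's.
import numpy as np

def actuator_signals(waveform: np.ndarray, n_actuators: int,
                     stagger_samples: int) -> list:
    signals = []
    total = len(waveform) + stagger_samples * (n_actuators - 1)
    for i in range(n_actuators):
        sig = np.zeros(total)
        start = i * stagger_samples
        sig[start:start + len(waveform)] = waveform
        signals.append(sig)
    return signals

sigs = actuator_signals(np.sin(np.linspace(0, 2 * np.pi, 64)),
                        n_actuators=3, stagger_samples=32)
print([s.shape for s in sigs])
```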

At step 1320, the one or more actuator signals are transmitted to one or more corresponding cutaneous actuators.

At step 1325, one or more cutaneous actuators generate haptic output in accordance with the corresponding one or more actuator signals, which cause one or more of various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature, on the second user's body. In some instances, the haptic output is generated with virtual content (e.g., the image or animation of the emoji and/or the sound effect of the emoji), which is generated and rendered by the client system in the extended reality environment displayed to the user based on the additional information (e.g., the text, the image or video, the audio, or any combination thereof).

ADDITIONAL CONSIDERATIONS

Although specific examples have been described, various modifications, alterations, alternative constructions, and equivalents are possible. Examples are not restricted to operation within certain specific data processing environments, but are free to operate within a plurality of data processing environments. Additionally, although certain examples have been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that this is not intended to be limiting. Although some flowcharts describe operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure. Various features and aspects of the above-described examples may be used individually or jointly.

Further, while certain examples have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also possible. Certain examples may be implemented only in hardware, or only in software, or using combinations thereof. The various processes described herein may be implemented on the same processor or different processors in any combination.

Where devices, systems, components, or modules are described as being configured to perform certain operations or functions, such configuration may be accomplished, for example, by designing electronic circuits to perform the operation, by programming programmable electronic circuits (such as microprocessors) to perform the operation such as by executing computer instructions or code, or by processors or cores programmed to execute code or instructions stored on a non-transitory memory medium, or any combination thereof. Processes may communicate using a variety of techniques, including but not limited to conventional techniques for inter-process communications, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times.

Specific details are given in this disclosure to provide a thorough understanding of the examples. However, examples may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the examples. This description provides examples only, and is not intended to limit the scope, applicability, or configuration of other examples. Rather, the preceding description of the examples will provide those skilled in the art with an enabling description for implementing various examples. Various changes may be made in the function and arrangement of elements.

The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that additions, subtractions, deletions, and other modifications and changes may be made thereunto without departing from the broader spirit and scope as set forth in the claims. Thus, although specific examples have been described, these are not intended to be limiting. Various modifications and equivalents are within the scope of the following claims.

In the foregoing description, for the purposes of illustration, methods were described in a particular order. It should be appreciated that in alternate examples, the methods may be performed in a different order than that described. It should also be appreciated that the methods described above may be performed by hardware components or may be embodied in sequences of machine-executable instructions, which may be used to cause a machine, such as a general-purpose or special-purpose processor or logic circuits programmed with the instructions, to perform the methods. These machine-executable instructions may be stored on one or more machine-readable mediums, such as CD-ROMs or other types of optical disks, floppy diskettes, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, flash memory, or other types of machine-readable mediums suitable for storing electronic instructions. Alternatively, the methods may be performed by a combination of hardware and software.

Where components are described as being configured to perform certain operations, such configuration may be accomplished, for example, by designing electronic circuits or other hardware to perform the operation, by programming programmable electronic circuits (e.g., microprocessors, or other suitable electronic circuits) to perform the operation, or any combination thereof.

While illustrative examples of the application have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations, except as limited by the prior art.

17.Adoptionofblendedlearning:ChineseuniversitystudentsAgainst the backdrop of the deep integration of the Internet with learning, blended learning offers the advantages of combining online and face-to-face learning to enrich the learning experience and improve knowledge management. Therefore, the objective of this present study is twofold: a. to fillhttps://www.nature.com/articles/s41599-023-01904-7